
    The Integrated Process Model For Learning Organization

    This paper proposes an integrated process model of a learning organization, consisting of six processes ("sensing", "innovating", "selecting", "implementing", "diffusing", and "feedback") and one base ("knowledge base & knowledge management"). Based on a literature review and interviews with multinational corporations in China, the model integrates the important organizational learning processes with knowledge management to reflect the reality of learning organizations comprehensively. Key elements that affect the organizational learning processes are then identified from the integrated model. Finally, the contributions, implications, limitations, and future research directions of the integrated process model are discussed.

    Modelling and scheduling of heterogeneous computing systems

    Ph.D. (Doctor of Philosophy)

    A New Scheme and Microstructural Model for 3D Full 5-directional Braided Composites

    Three-dimensional (3D) braided composites are a kind of advanced composite increasingly used in the aeronautical and astronautical fields. The advantages, applications, and shortcomings of 3D braided composites are analyzed, and a possible approach to improving the properties of these materials is presented: a new type of 3D full 5-directional braided composite is developed. Methods for making this type of preform are proposed. It is pointed out that four-step braiding, which is the most feasible route to industrialized production, has almost no effect on the composites' properties. By analyzing the simulation model, the advantages of the material over the 3D 4-directional and 5-directional materials are presented. Finally, a microstructural model is analyzed to lay the foundation for future theoretical analysis of these composites.

    On the Sampling of Serial Sectioning Technique for Three-Dimensional Space-Filling Grain Structures

    The serial sectioning technique provides a wealth of quantitative geometric information about the microstructure analyzed, including information unavailable from stereology with one- and two-dimensional probes. This may be why it has long served, and continues to serve, as one of the most common and invaluable methods for studying the size and size distribution, the topology and the distribution of topological parameters, and even the shape of three-dimensional space-filling grains or cells. On the other hand, the method requires tedious laboratory work and is very time- and energy-consuming; in almost all reported practice, fewer than one hundred grains per sample were sectioned and measured. A question is therefore often asked: for typical microstructures in engineering materials, is this number of grains or cells adequate to obtain reliable results from the technique? To answer this question, experimental data on 1292 contiguous austenite grains in a low-carbon steel specimen, obtained from serial sectioning analysis, are presented in this paper; they demonstrate the effect of sampling on the measurement of various parameters of the grain size distribution and of the grain topology distribution. The result provides a rule of thumb for grain stereology of similar microstructures.
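
    As a rough illustration of this sampling question (not the paper's data or method), the sketch below repeatedly subsamples a synthetic grain-size population of 1292 grains, here assumed lognormal, and shows how the scatter of the estimated mean size and coefficient of variation shrinks as more grains are measured:

    import numpy as np

    # Illustrative only: the paper measured 1292 real austenite grains by serial
    # sectioning; here a lognormal grain-size population of the same count is
    # assumed, purely to show how estimates scatter when far fewer grains are sampled.
    rng = np.random.default_rng(0)
    population = rng.lognormal(mean=0.0, sigma=0.45, size=1292)  # relative grain sizes

    for n in (50, 100, 300, 1292):
        means, cvs = [], []
        for _ in range(2000):
            # Draw n grains without replacement and record the estimated statistics.
            sample = rng.choice(population, size=n, replace=False)
            means.append(sample.mean())
            cvs.append(sample.std(ddof=1) / sample.mean())
        print(f"n={n:4d}  mean size {np.mean(means):.3f} +/- {np.std(means):.3f}  "
              f"CV {np.mean(cvs):.3f} +/- {np.std(cvs):.3f}")

    With only 50-100 grains, the spread of the estimates is visibly larger than with the full set, which mirrors the sampling effect the paper quantifies on real measurements.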

    Semantic-aware Consistency Network for Cloth-changing Person Re-Identification

    Cloth-changing Person Re-Identification (CC-ReID) is a challenging task that aims to retrieve the target person across multiple surveillance cameras when clothing changes might happen. Despite recent progress in CC-ReID, existing approaches are still hindered by the interference of clothing variations, since they lack effective constraints to keep the model consistently focused on clothing-irrelevant regions. To address this issue, we present a Semantic-aware Consistency Network (SCNet) to learn identity-related semantic features by proposing effective consistency constraints. Specifically, we generate a black-clothing image by erasing pixels in the clothing area, which explicitly mitigates the interference from clothing variations. In addition, to fully exploit the fine-grained identity information, a head-enhanced attention module is introduced, which learns soft attention maps by utilizing the proposed part-based matching loss to highlight head information. We further design a semantic consistency loss to facilitate the learning of high-level identity-related semantic features, forcing the model to focus on semantically consistent clothing-irrelevant regions. By using the consistency constraint, our model does not require any extra auxiliary segmentation module to generate the black-clothing image or locate the head region during the inference stage. Extensive experiments on four cloth-changing person Re-ID datasets (LTCC, PRCC, VC-Clothes, and DeepChange) demonstrate that our proposed SCNet makes significant improvements over prior state-of-the-art approaches. Our code is available at: https://github.com/Gpn-star/SCNet. Comment: Accepted by ACM MM 202
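
    A minimal sketch of the black-clothing idea described above, assuming a human-parsing map is available at training time; the label IDs and function name below are hypothetical and not taken from the SCNet code:

    import torch

    # Hypothetical parsing label IDs for clothing regions; the parser and label
    # scheme actually used by SCNet may differ.
    CLOTHING_LABELS = {5, 6, 7}

    def make_black_clothing_image(image: torch.Tensor, parsing: torch.Tensor) -> torch.Tensor:
        """Erase (zero out) clothing-area pixels.

        image:   (3, H, W) float tensor
        parsing: (H, W) integer tensor of per-pixel semantic labels
        """
        clothing_mask = torch.zeros_like(parsing, dtype=torch.bool)
        for label in CLOTHING_LABELS:
            clothing_mask |= parsing == label
        # Keep everything that is not clothing; clothing pixels become black (zero).
        return image * (~clothing_mask).to(image.dtype).unsqueeze(0)

    In the paper, the erased image is only needed to impose the consistency constraints during training, which is why no segmentation module is required at inference time.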

    Kinematics Based Visual Localization for Skid-Steering Robots: Algorithm and Theory

    To build commercial robots, skid-steering mechanical designs are increasingly popular due to their manufacturing simplicity and unique mechanism. However, this design also poses significant challenges for software and algorithm design, especially for pose estimation (i.e., determining the robot's rotation and position), which is the prerequisite of autonomous navigation. While general localization algorithms have been extensively studied in the research community, fundamental problems still need to be resolved for localizing skid-steering robots, which change their orientation with a skid. To tackle this problem, we propose a probabilistic sliding-window estimator dedicated to skid-steering robots, using measurements from a monocular camera, the wheel encoders, and optionally an inertial measurement unit (IMU). Specifically, we explicitly model the kinematics of skid-steering robots by both track instantaneous centers of rotation (ICRs) and correction factors, which are capable of compensating for the complexity of track-to-terrain interaction, imperfections in the mechanical design, terrain conditions and smoothness, and so on. To prevent performance degradation over the robots' lifelong missions, the time- and location-varying kinematic parameters are estimated online along with the pose estimation states in a tightly-coupled manner. More importantly, we conduct an in-depth observability analysis for different sensor and design configurations in this paper, which provides theoretical tools for making the correct choices when building real commercial robots. In our experiments, we validate the proposed method with both simulation tests and real-world experiments, which demonstrate that our method outperforms competing methods by wide margins. Comment: 18 pages in total
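
    For context, the ICR-based kinematic model commonly used for skid-steering robots maps encoder velocities to a body-frame twist through the track ICR parameters; the sketch below shows that mapping (the parameter names are ours, and the paper's additional correction factors and online parameter estimation are omitted):

    def skid_steer_body_velocity(v_l, v_r, x_icr, y_icr_l, y_icr_r):
        """ICR-based kinematics for a skid-steering robot (body frame: x forward, y left).

        v_l, v_r         : left/right wheel (track) linear velocities from the encoders
        x_icr            : shared x-coordinate of the track ICRs (lateral-slip term)
        y_icr_l, y_icr_r : y-coordinates of the left/right track ICRs

        For an ideal differential drive with track width B (no skidding),
        x_icr = 0, y_icr_l = +B/2, y_icr_r = -B/2, which recovers
        v_x = (v_l + v_r) / 2 and omega = (v_r - v_l) / B.
        """
        d = y_icr_l - y_icr_r
        v_x = (-y_icr_r * v_l + y_icr_l * v_r) / d
        v_y = (x_icr * v_l - x_icr * v_r) / d
        omega_z = (v_r - v_l) / d
        return v_x, v_y, omega_z

    Estimating x_icr, y_icr_l, and y_icr_r online, as the paper does, lets the same model absorb terrain-dependent slip instead of treating it as unmodeled noise.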

    CodeVIO: Visual-Inertial Odometry with Learned Optimizable Dense Depth

    In this work, we present a lightweight, tightly-coupled deep depth network and visual-inertial odometry (VIO) system, which can provide accurate state estimates and dense depth maps of the immediate surroundings. Leveraging the proposed lightweight Conditional Variational Autoencoder (CVAE) for depth inference and encoding, we provide the network with previously marginalized sparse features from VIO to increase the accuracy of the initial depth prediction and the generalization capability. The compact encoded depth maps are then updated jointly with the navigation states in a sliding-window estimator in order to provide the dense local scene geometry. We additionally propose a novel method to obtain the CVAE's Jacobian, which is shown to be more than an order of magnitude faster than in previous works, and we further leverage First-Estimate Jacobians (FEJ) to avoid recalculation. As opposed to previous works relying on completely dense residuals, we propose to provide only sparse measurements to update the depth code, and we show through careful experimentation that our choice of sparse measurements and FEJs can still significantly improve the estimated depth maps. Our full system also exhibits state-of-the-art pose estimation accuracy, and we show that it can run in real time with single-thread execution while utilizing GPU acceleration only for the network and the code Jacobian. Comment: 6 figures
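
    As a toy illustration (not CodeVIO's actual network, code, or API), the sketch below differentiates predicted depth at a few sparse pixel locations with respect to a compact depth code using automatic differentiation, which is the kind of code Jacobian the sliding-window update described above consumes:

    import torch

    # Stand-in for the CVAE depth decoder: maps a compact depth code to a dense
    # depth map. The real CodeVIO decoder is also conditioned on image features;
    # this tiny architecture is an assumption purely for illustration.
    class TinyDepthDecoder(torch.nn.Module):
        def __init__(self, code_dim=32, h=24, w=32):
            super().__init__()
            self.h, self.w = h, w
            self.fc = torch.nn.Linear(code_dim, h * w)

        def forward(self, code):
            return self.fc(code).reshape(self.h, self.w)

    decoder = TinyDepthDecoder()
    code = torch.zeros(32)                          # linearization point (e.g. the prior code)
    uv = torch.tensor([[5, 7], [12, 20], [18, 3]])  # sparse feature pixels, as from VIO

    def sparse_depth(c):
        # Depth predicted at the sparse measurement pixels, as a function of the code only.
        d = decoder(c)
        return d[uv[:, 0], uv[:, 1]]

    # Jacobian of the sparse depths w.r.t. the code; held fixed afterwards in the
    # spirit of First-Estimate Jacobians so it need not be recomputed every update.
    J = torch.autograd.functional.jacobian(sparse_depth, code)  # shape (3, 32)
    print(J.shape)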