10 research outputs found

    Aging Detection of Electrical Point Machines Based on Support Vector Data Description

    Electrical point machines (EPMs) must be replaced at an appropriate time to prevent operational safety or stability problems in trains caused by aging, while respecting budget constraints. However, it is difficult to replace EPMs effectively because their aging conditions depend on the operating environment, and a fixed guideline is therefore typically not suitable for determining the most timely moment for replacement. In this study, we propose a classification method for detecting the aging effect to facilitate the timely replacement of EPMs. We employ support vector data description to separate the data of “aged” and “not-yet-aged” equipment by analyzing the subtle differences in normalized electrical signals that result from aging. Based on before- and after-replacement data obtained from experimental studies conducted on EPMs, we confirmed that the proposed method was capable of classifying machines according to the exhibited aging effects with adequate accuracy.
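    As a rough illustration of the one-class idea behind this abstract (not its exact implementation), the sketch below trains scikit-learn's OneClassSVM, which with an RBF kernel is closely related to SVDD, on features of “not-yet-aged” current signals and flags outliers as “aged”. The feature set and the placeholder data are assumptions.

```python
# Sketch: one-class classification of "not-yet-aged" EPM current signals.
# OneClassSVM with an RBF kernel is used as a stand-in for SVDD; the feature
# extraction below (summary statistics of the normalized current curve) is an
# illustrative assumption, not the paper's exact feature set.
import numpy as np
from sklearn.svm import OneClassSVM

def extract_features(current_signal: np.ndarray) -> np.ndarray:
    """Summary statistics of a normalized electric-current curve (assumed features)."""
    s = (current_signal - current_signal.min()) / (np.ptp(current_signal) + 1e-9)
    return np.array([s.mean(), s.std(), s.max(), np.trapz(s), float(len(s))])

# Training set: signals recorded right after replacement ("not-yet-aged").
train_signals = [np.random.rand(500) for _ in range(50)]        # placeholder data
X_train = np.vstack([extract_features(s) for s in train_signals])

model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_train)

# A new signal is flagged as "aged" if it falls outside the learned description.
test_signal = np.random.rand(500)                                # placeholder data
is_aged = model.predict(extract_features(test_signal).reshape(1, -1))[0] == -1
print("aged" if is_aged else "not-yet-aged")
```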

    Fast Pig Detection with a Top-View Camera under Various Illumination Conditions

    The fast detection of pigs is a crucial aspect of a surveillance environment intended for the ultimate purpose of 24 h tracking of individual pigs. In particular, a realistic pig farm environment involves various illumination conditions, such as sunlight, but such conditions have not yet been addressed. We propose a fast method to detect pigs under various illumination conditions by exploiting the complementary information from depth and infrared images. By applying spatiotemporal interpolation, we first remove the noise caused by sunlight. Then, we carefully analyze the characteristics of both the depth and infrared information and detect pigs using only simple image processing techniques. Rather than relying on highly time-consuming techniques, such as frequency-, optimization-, or deep learning-based detection, our image processing-based method guarantees a fast execution time for the final goal, i.e., intelligent pig monitoring applications. The experimental results show that pigs could be detected effectively with the proposed method in terms of both accuracy (i.e., 0.79) and execution time (i.e., 8.71 ms), even under various illumination conditions.
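    The pipeline summarized above (spatiotemporal interpolation to suppress sunlight noise, then simple depth/infrared thresholding) could look roughly like the sketch below. The zero-value convention for invalid depth pixels, the assumed floor distance, and all thresholds are illustrative assumptions rather than the paper's calibrated values.

```python
# Sketch: sunlight-noise removal by spatiotemporal interpolation, followed by
# simple depth/infrared thresholding. All constants are illustrative assumptions.
import numpy as np
import cv2

def interpolate_noise(frames: list[np.ndarray]) -> np.ndarray:
    """Fill invalid (zero) depth pixels in the middle of three consecutive frames,
    first from temporal neighbors, then spatially from nearby valid pixels."""
    prev_f, cur, next_f = [f.astype(np.float32) for f in frames]
    out = cur.copy()
    invalid = cur == 0
    temporal = np.where(prev_f > 0, prev_f, next_f)          # temporal fill
    out[invalid] = temporal[invalid]
    still_invalid = out == 0
    blurred = cv2.medianBlur(out, 5)                          # spatial fill
    out[still_invalid] = blurred[still_invalid]
    return out

def detect_pigs(depth: np.ndarray, infrared: np.ndarray) -> np.ndarray:
    """Binary pig mask from complementary depth and infrared cues (illustrative)."""
    floor_depth_mm = 2800                                     # assumed camera-to-floor distance
    depth_mask = ((depth > 0) & (depth < floor_depth_mm - 150)).astype(np.uint8)
    ir_mask = (infrared > 120).astype(np.uint8)               # pigs appear bright in IR
    mask = cv2.bitwise_or(depth_mask, ir_mask) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask
```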

    Replacement Condition Detection of Railway Point Machines Using an Electric Current Sensor

    Detecting the replacement conditions of railway point machines is important to simultaneously satisfy budget-limit and train-safety requirements. In this study, we consider classification of the subtle differences in the aging effect, using electric current shape analysis, for the purpose of replacement condition detection of railway point machines. After analyzing the shapes of the after-replacement data and then labeling the shape of each before-replacement data item, we derive criteria that can handle the subtle differences between “does-not-need-to-be-replaced” and “needs-to-be-replaced” shapes. On the basis of experimental results with in-field replacement data, we confirmed that the proposed method could detect the replacement conditions with acceptable accuracy, as well as provide visual interpretability of the criteria used for the time-series classification.
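    A minimal sketch of the shape-comparison idea follows: resample and normalize each current curve so that only its shape matters, then compare a before-replacement curve against the mean after-replacement shape. The distance measure and the threshold value are assumptions; the paper derives its own criteria.

```python
# Sketch: shape-based comparison of an in-service current curve against a
# reference built from after-replacement ("healthy") curves. The resampled,
# normalized mean-absolute distance and the threshold are illustrative.
import numpy as np

def normalize_shape(curve: np.ndarray, n_points: int = 200) -> np.ndarray:
    """Resample to a fixed length and scale to [0, 1] so only the shape matters."""
    x_old = np.linspace(0.0, 1.0, len(curve))
    x_new = np.linspace(0.0, 1.0, n_points)
    resampled = np.interp(x_new, x_old, curve)
    return (resampled - resampled.min()) / (np.ptp(resampled) + 1e-9)

def needs_replacement(before_curve, after_curves, threshold=0.08) -> bool:
    """Flag 'needs-to-be-replaced' when the curve drifts too far from the
    mean after-replacement shape (threshold is a placeholder)."""
    reference = np.mean([normalize_shape(c) for c in after_curves], axis=0)
    distance = np.mean(np.abs(normalize_shape(before_curve) - reference))
    return distance > threshold
```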

    A Kinect-Based Segmentation of Touching-Pigs for Real-Time Monitoring

    Segmenting touching-pigs in real time is an important issue for surveillance cameras intended for the 24-h tracking of individual pigs, but methods to do so have not yet been reported. We focus in particular on the segmentation of touching-pigs in a crowded pig room, using low-contrast images obtained with a Kinect depth sensor. We reduce the execution time by combining object detection techniques based on a convolutional neural network (CNN) with image processing techniques, instead of applying time-consuming operations such as optimization-based segmentation. We first apply the fastest CNN-based object detection technique (i.e., You Only Look Once, YOLO) to solve the separation problem for touching-pigs. If the quality of the YOLO output is not satisfactory, we then try to find a possible boundary line between the touching-pigs by analyzing their shape. Our experimental results show that this method separates touching-pigs effectively in terms of both accuracy (i.e., 91.96%) and execution time (i.e., real-time execution), even with low-contrast images obtained using a Kinect depth sensor.
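    The fallback step described above (finding a boundary line by shape analysis when the YOLO output is unsatisfactory) might be sketched as below, using OpenCV convexity defects to cut a touching-pigs blob at its two deepest concavities. The upstream detection stage that produces the binary blob mask is assumed, and this stand-in is not the paper's exact shape analysis.

```python
# Sketch: fallback separation of a touching-pigs blob when the detector output
# is unreliable. A candidate boundary is drawn between the two deepest
# concavity points of the blob contour (convexity defects).
import numpy as np
import cv2

def split_touching_blob(blob_mask: np.ndarray) -> np.ndarray:
    """Cut a binary blob of two touching pigs along its two deepest concavities."""
    contours, _ = cv2.findContours(blob_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(contour, returnPoints=False)
    defects = cv2.convexityDefects(contour, hull)
    if defects is None or len(defects) < 2:
        return blob_mask                                   # nothing to split
    # Pick the two defects with the largest depth as the cut endpoints.
    deepest = defects[:, 0, 3].argsort()[-2:]
    p1 = tuple(int(v) for v in contour[defects[deepest[0], 0, 2]][0])
    p2 = tuple(int(v) for v in contour[defects[deepest[1], 0, 2]][0])
    separated = blob_mask.copy()
    cv2.line(separated, p1, p2, color=0, thickness=3)      # erase along the boundary
    return separated
```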

    Depth-Based Detection of Standing-Pigs in Moving Noise Environments

    In a surveillance camera environment, the real-time detection of standing-pigs is an important issue toward the final goal of 24-h tracking of individual pigs. In this study, we focus on the depth-based detection of standing-pigs with “moving noises”, which appear every night in a commercial pig farm but have not yet been reported. We first apply a spatiotemporal interpolation technique to remove the moving noises occurring in the depth images. Then, we detect the standing-pigs by utilizing the undefined depth values around them. Our experimental results show that this method is effective for detecting standing-pigs at night, in terms of both cost-effectiveness (using a low-cost Kinect depth sensor) and accuracy (i.e., 94.47%), even with severe moving noises occluding up to half of an input depth image. Furthermore, the proposed method can be executed in real time without any time-consuming techniques.
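    A rough sketch of the cue described above, in which undefined depth values (reported as 0 by the Kinect) around standing-pigs are exploited after noise removal, is given below; the floor distance, height margin, and area threshold are illustrative assumptions.

```python
# Sketch: standing-pig candidates from a denoised depth frame, using elevated
# depth regions together with the halo of undefined (zero) values around them.
# All constants are placeholders, not the paper's calibrated values.
import numpy as np
import cv2

def detect_standing_pigs(depth: np.ndarray, floor_depth_mm: int = 2800,
                         min_area: int = 1500) -> list[np.ndarray]:
    """Return contours of standing-pig candidates from a denoised depth frame."""
    undefined = (depth == 0).astype(np.uint8)                 # halo of undefined values
    elevated = ((depth > 0) & (depth < floor_depth_mm - 250)).astype(np.uint8)
    candidate = cv2.bitwise_or(elevated, cv2.dilate(undefined, np.ones((3, 3), np.uint8)))
    candidate = cv2.morphologyEx(candidate * 255, cv2.MORPH_CLOSE,
                                 np.ones((7, 7), np.uint8))
    contours, _ = cv2.findContours(candidate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) > min_area]
```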

    A fully roll-to-roll gravure-printed carbon nanotube-based active matrix for multi-touch sensors.

    Roll-to-roll (R2R) printing has been pursued as a commercially viable high-throughput technology for manufacturing flexible, disposable, and inexpensive printed electronic devices. However, in recent years, pessimism has prevailed because of the barriers faced when attempting to fabricate and integrate thin-film transistors (TFTs) using an R2R printing method. In this paper, we report 20 × 20 active matrices (AMs) based on single-walled carbon nanotubes (SWCNTs) with a resolution of 9.3 points per inch (ppi), obtained using a fully R2R gravure printing process. By using SWCNTs as the semiconducting layer and poly(ethylene terephthalate) (PET) as the substrate, we obtained a device yield above 98% and extracted the key scalability factors required for a feasible R2R gravure manufacturing process. Multi-touch sensor arrays were achieved by laminating a pressure-sensitive rubber onto the SWCNT-TFT AM. This R2R gravure printing system overcomes the barriers associated with the registration accuracy of printing each layer and the variation of the threshold voltage (Vth). By overcoming these barriers, the R2R gravure printing method can become viable as an advanced manufacturing technology, enabling the high-throughput production of flexible, disposable, and human-interactive cutting-edge electronic devices based on SWCNT-TFT AMs.