2 research outputs found

    Design and Construction of an Automatic Home and Office Power Control System

    In homes and offices, it is very common for occupants to forget to switch OFF the lighting and fans when leaving the premises. This can be attributed to human forgetfulness and to the erratic ("epileptic") power supply, whose interruptions cause users to forget the state of their appliances (whether they are ON or OFF). Consequently, these appliances continue to run whenever power is restored, even though the occupants may have vacated the premises. This is no small contributor to energy wastage in a country like Nigeria, where the energy supply is inadequate to go round the populace. In this work, a simple but robust automatic home and office power control system is developed to detect the presence of an occupant in a room through a passive infrared (PIR) sensor and to control the electrical appliances (lighting and fan) in the room. Certain conditions must be met for the lighting and the fan to operate: the lighting comes on when the PIR sensor senses the presence of an occupant and the room is in darkness, while the fan runs when there is an occupant and the room temperature is above 35 °C. These conditions are programmed to suit the needs of the occupant and cannot be changed by the user. The device automatically switches the appliances OFF within five minutes after the last occupant leaves the room.
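    The switching rules described above reduce to two occupancy-gated conditions plus a hold-off timer. The sketch below illustrates that logic in Python under stated assumptions: the sensor-reading functions are hypothetical stubs standing in for the PIR, light and temperature inputs, the 35 °C threshold and five-minute delay are taken from the abstract, and the paper's actual hardware and firmware are not reproduced here.

```python
import time

# Thresholds taken from the abstract; the sensor functions below are
# hypothetical stand-ins for the PIR, light and temperature readings.
TEMP_THRESHOLD_C = 35.0   # fan switches on above this temperature
OFF_DELAY_S = 5 * 60      # appliances switch off 5 minutes after the last occupant

def read_pir() -> bool:
    """Return True when the PIR sensor detects an occupant (stub)."""
    return False

def read_is_dark() -> bool:
    """Return True when the room is in darkness (stub)."""
    return True

def read_temperature_c() -> float:
    """Return the room temperature in degrees Celsius (stub)."""
    return 30.0

def control_loop() -> None:
    last_presence = None
    while True:
        now = time.monotonic()
        if read_pir():
            last_presence = now
        # Appliances stay eligible while an occupant was seen within the off delay.
        active = last_presence is not None and (now - last_presence) < OFF_DELAY_S
        light_on = active and read_is_dark()
        fan_on = active and read_temperature_c() > TEMP_THRESHOLD_C
        # Relay outputs would be driven here; printing stands in for GPIO writes.
        print(f"light={'ON' if light_on else 'OFF'} fan={'ON' if fan_on else 'OFF'}")
        time.sleep(1.0)

if __name__ == "__main__":
    control_loop()
```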

    SIFT-CNN Pipeline in Livestock Management: A Drone Image Stitching Algorithm

    Images taken by drones often must be preprocessed and stitched together because of their inherent noise, narrow imaging breadth, flying height, and angle of view. Conventional UAV feature-based image stitching techniques rely heavily on the quality of feature identification from image pixels and frequently fail to stitch images with few features or low resolution. Later approaches were developed to eliminate these issues by using deep learning-based stitching to capture the general attributes of remote sensing images before they are stitched. However, because the empty backgrounds in the images are classified as stitching points, it is challenging to distinguish livestock in a grazing area, and consequently less information can be inferred from the surveillance data. This study provides a four-stage object-based image stitching technique that removes the empty background and classifies images of the grazing field before stitching. In the first stage, the drone-based image sequence of the livestock on the grazing field is preprocessed. In the second stage, the images of the cattle on the grazing field are classified to eliminate the empty spaces or backgrounds. The third stage uses the improved SIFT to detect the feature points of the classified images and obtain the feature point descriptors. Lastly, the stitching area is computed using the image projection transformation.
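    The third and fourth stages, SIFT feature detection and projective alignment, correspond to a standard feature-based stitching step. The sketch below is a minimal Python/OpenCV illustration of that generic step only, assuming ordinary SIFT (cv2.SIFT_create), Lowe's ratio test, and a RANSAC-estimated homography; the paper's preprocessing, livestock classification stage, and its "improved SIFT" are not reproduced here.

```python
import cv2
import numpy as np

def stitch_pair(img_left, img_right, ratio=0.75):
    """Stitch two overlapping images with standard SIFT + homography.

    A generic sketch of feature-based stitching, not the paper's pipeline.
    """
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_left, None)
    kp2, des2 = sift.detectAndCompute(img_right, None)

    # Lowe's ratio test on brute-force k-NN matches.
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    if len(good) < 4:
        raise ValueError("not enough matches to estimate a homography")

    # Map points in the right image onto the left image's frame.
    src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Project the right image and paste the left image into the panorama canvas.
    h, w = img_left.shape[:2]
    panorama = cv2.warpPerspective(img_right, H, (w + img_right.shape[1], h))
    panorama[0:h, 0:w] = img_left
    return panorama
```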