23 research outputs found

    Sign Language Detection

    Full text link
    With the advancements in computer vision techniques, classifying images based on their features has become both a major task and a necessity. In this project we propose two models: the first performs feature extraction and classification using ORB and an SVM, and the second uses a CNN architecture. The end goal of the project is to understand the concepts behind feature extraction and image classification. The trained CNN model will also be converted to TFLite format for Android development. Comment: 8 pages, 10 figures
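    A minimal sketch of the first pipeline described in the abstract, assuming images are already loaded as grayscale arrays with integer labels and that per-image ORB descriptors are aggregated by a simple mean rather than a full bag-of-visual-words vocabulary (the function names are illustrative, not from the paper):

```python
# Sketch of an ORB + SVM image-classification pipeline (assumptions noted above).
import cv2
import numpy as np
from sklearn.svm import SVC

def orb_feature_vector(gray_image, n_features=500):
    """Extract ORB descriptors and collapse them into one fixed-length vector."""
    orb = cv2.ORB_create(nfeatures=n_features)
    _, descriptors = orb.detectAndCompute(gray_image, None)
    if descriptors is None:                       # no keypoints found in the image
        return np.zeros(32, dtype=np.float32)     # ORB descriptors are 32 bytes long
    return descriptors.mean(axis=0).astype(np.float32)

def train_sign_classifier(gray_images, labels):
    """Fit an SVM on the aggregated ORB features of the training images."""
    features = np.stack([orb_feature_vector(img) for img in gray_images])
    classifier = SVC(kernel="rbf")
    classifier.fit(features, labels)
    return classifier
```

    For the CNN path, a trained tf.keras model could be converted for the Android app with tf.lite.TFLiteConverter.from_keras_model, though the abstract does not specify the conversion settings used.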

    Enhancement of Battery Life by using Efficient Energy Monitoring System

    Get PDF
    Nowadays, every sector faces the challenge of obtaining an uninterrupted power supply. Satisfying every individual's need for electrical energy is a challenge for power engineers, who must consider alternative energy sources such as solar, wind, hybrid, and tidal systems. Until now, we have relied primarily on fossil fuels to meet our energy needs; in the coming years, growing demand will make it necessary to adopt renewable energy as a primary source. Productive use of electrical energy is the basis for long-term sustainable economic development. Solar energy is the ultimate renewable source of energy: harnessing it holds great promise for the world's energy crisis, and it will be heavily called upon as fossil fuels are depleted. This work compares traditional and renewable energy sources on the basis of observed and calculated circuit parameters, with the help of an energy monitoring system. The hardware used in this study comprises several technologies, including a PLC (Programmable Logic Controller) and SCADA (Supervisory Control and Data Acquisition), along with the sources to be compared.
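    A minimal sketch of the kind of circuit-parameter comparison such a monitoring system performs, assuming the PLC/SCADA layer has already logged voltage and current samples at a fixed interval; the field names and sample values are illustrative, not from the paper:

```python
# Compare logged sources by average power and accumulated energy (illustrative only).
from dataclasses import dataclass

@dataclass
class SourceLog:
    name: str
    voltages: list[float]      # volts, one sample every `interval_s` seconds
    currents: list[float]      # amps, taken at the same sampling instants
    interval_s: float = 1.0

def average_power_w(log: SourceLog) -> float:
    """Mean instantaneous power P = V * I over the logged samples."""
    samples = [v * i for v, i in zip(log.voltages, log.currents)]
    return sum(samples) / len(samples)

def energy_kwh(log: SourceLog) -> float:
    """Integrate power over time and convert joules to kWh."""
    joules = sum(v * i * log.interval_s for v, i in zip(log.voltages, log.currents))
    return joules / 3.6e6

grid = SourceLog("grid", [230.0, 229.5, 230.2], [4.1, 4.0, 4.2])
solar = SourceLog("solar", [230.1, 229.8, 230.0], [3.2, 3.5, 3.4])
for src in (grid, solar):
    print(f"{src.name}: avg power {average_power_w(src):.1f} W, "
          f"energy {energy_kwh(src):.6f} kWh")
```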

    SANIP: Shopping Assistant and Navigation for the visually impaired

    Full text link
    The proposed shopping assistant model SANIP helps blind persons detect hand-held objects and receive video feedback of the information retrieved from the detected and recognized objects. The proposed system consists of three Python models: custom object detection, text detection, and barcode detection. For detection of hand-held objects, we created our own custom dataset comprising daily goods such as Parle-G, Tide, and Lays; we also collected images of carts and exit signs, since it is essential for any shopper to use a cart and to notice the exit sign in case of emergency. For the other two models, the retrieved text and barcode information is converted from text to speech and relayed to the blind person. The model was able to detect and recognize the objects it was trained on with good accuracy and precision. Comment: 6 pages, 8 figures. arXiv admin note: text overlap with arXiv:2011.04244 by other authors
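    A minimal sketch of the text- and barcode-to-speech path described in the abstract, assuming pyzbar, pytesseract and pyttsx3 are installed, Tesseract is on the PATH, and `frame` is a BGR camera image; the custom object-detection model is out of scope here and the function names are illustrative:

```python
# Detect barcodes and printed text in a camera frame and speak the result.
import cv2
import pytesseract
import pyttsx3
from pyzbar.pyzbar import decode

def speak(text: str) -> None:
    """Relay the recognised information to the user as speech."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

def read_frame_aloud(frame) -> None:
    """Run barcode detection and OCR on one frame, then speak what was found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    for barcode in decode(gray):                       # barcode detection
        speak(f"Barcode detected: {barcode.data.decode('utf-8')}")

    text = pytesseract.image_to_string(gray).strip()   # text detection (OCR)
    if text:
        speak(f"Label text: {text}")
```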