
    Application on character recognition system on road sign for visually impaired: case study approach and future

    Many visually impaired people worldwide are unable to travel safely and autonomously because they cannot perceive the visual information they need in daily life. In this research, we study how to extract character information from road signs and convey it effectively to the visually impaired, so that they can understand it more easily. In Phase I, we apply the Maximally Stable Extremal Regions and Stroke Width Transform methods so that the letters on road signs can be recognized and their text conveyed to the user. Using samples of simple road signs, Phase I extracted the sign information after segmenting the exact character area, but accuracy was poor for Hangul (Korean characters). Initial Phase II experiments succeeded in transmitting the text recognized in Phase I to visually impaired users. In future work, a miniaturized, wearable character recognition system for the visually impaired will need to be developed and verified. In this paper, we examine methods for recognizing road sign characters and present a possibility that may be applicable to our final development
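    The Stroke Width Transform exploits the fact that printed characters tend to have nearly uniform stroke width. As a rough illustration of that idea only (not the authors' implementation, and far simpler than the real transform), the sketch below measures horizontal run lengths over a binary bitmap and accepts a region when the measured widths cluster tightly; all function names and thresholds are hypothetical.

```python
# Toy illustration of the stroke-width heuristic behind SWT: text strokes
# have near-constant width, while blobs and background clutter do not.

def horizontal_stroke_widths(bitmap):
    """Collect the horizontal run length covering every foreground (1) pixel."""
    widths = []
    for row in bitmap:
        x = 0
        while x < len(row):
            if row[x] == 1:
                run = 0
                while x < len(row) and row[x] == 1:
                    run += 1
                    x += 1
                widths.extend([run] * run)  # one sample per covered pixel
            else:
                x += 1
    return widths

def looks_like_text(bitmap, max_spread=1):
    """Accept a region if its stroke widths vary by at most max_spread pixels."""
    widths = horizontal_stroke_widths(bitmap)
    if not widths:
        return False
    return max(widths) - min(widths) <= max_spread

# Two uniform 1-pixel vertical strokes (letter-like) ...
glyph = [
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
]
# ... versus a wedge whose run lengths vary widely (non-text clutter).
blob = [
    [1, 1, 1, 1],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
]
```

    The real SWT traces gradient directions across strokes in both axes and pairs with MSER region proposals; this sketch keeps only the width-uniformity test, which is the property that separates characters from arbitrary blobs.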

    Augmented reality applied to language translation

    Being a tourist in a foreign country is an adventure full of memories and experiences, but it can be truly challenging when it comes to communication. Finding yourself in an unknown place, where all the road signs and guidelines use unfamiliar characters, may lead to a dead end or to unexpected results. So what if we could use a smartphone to read that restaurant menu, or find the right department in a mall? The applications are many, and the market is ready to invest in creative and economical ideas. This dissertation explores the field of Augmented Reality, helping the user enrich his view with information. The ability to look around, detect the text in the surroundings, and read its translation in one's own language is a great step toward overcoming language barriers. Moreover, using smartphones within anyone's reach, or wearing smartglasses that are even less intrusive, offers a chance to bring a complex matter into the daily routine. This technology requires flexible, accurate, and fast Optical Character Recognition and translation systems in an Internet of Things setting. Quality and precision are a must, yet remain to be further developed and improved. Entering a real-time digital data environment will support great causes and aid the progress and evolution of many intervention areas

    VANET Applications: Hot Use Cases

    Current challenges for car manufacturers are to make roads safe, to achieve free-flowing traffic with little congestion, and to reduce pollution through effective fuel use. To reach these goals, many improvements are made in-car, but more and more approaches rely on connected cars with communication capabilities between cars, with an infrastructure, or with IoT devices. Monitoring and coordinating vehicles then makes it possible to compute intelligent modes of transportation. Connected cars have introduced a new way of thinking about cars: not only as a means for a driver to go from A to B, but as smart cars, a user extension like the smartphone today. In this report, we introduce concepts and specific vocabulary in order to classify current innovations and ideas on the emerging topic of the smart car. We present a graphical categorization showing this evolution as a function of societal evolution. Different perspectives are adopted: a vehicle-centric view, a vehicle-network view, and a user-centric view, described by simple and complex use cases and illustrated by a list of emerging and current projects from the academic and industrial worlds. We identified an empty space in innovation between the user and his car: paradoxically, even though the two are in constant interaction, they are separated by different application uses. The future challenge is to interlace the social concerns of the user with intelligent and efficient driving

    The Emerging Internet of Things Marketplace From an Industrial Perspective: A Survey

    The Internet of Things (IoT) is a dynamic global information network consisting of internet-connected objects, such as radio-frequency identification (RFID) tags, sensors, actuators, and other instruments and smart appliances, that are becoming an integral component of the future internet. Over the last decade, we have seen a large number of IoT solutions developed by start-ups, small and medium enterprises, large corporations, academic research institutes (such as universities), and private and public research organisations making their way into the market. In this paper, we survey over one hundred IoT smart solutions in the marketplace and examine them closely in order to identify the technologies used, their functionalities, and their applications. More importantly, we identify the trends, opportunities, and open challenges in industry-based IoT solutions. Based on the application domain, we classify and discuss these solutions under five categories: smart wearable, smart home, smart city, smart environment, and smart enterprise. This survey is intended to serve as a guideline and conceptual framework for future research in the IoT and to motivate and inspire further developments. It also provides a systematic exploration of existing research and suggests a number of potentially significant research directions. Comment: IEEE Transactions on Emerging Topics in Computing 201

    Proceedings of the 3rd IUI Workshop on Interacting with Smart Objects

    These are the Proceedings of the 3rd IUI Workshop on Interacting with Smart Objects. Objects that we use in our everyday life are expanding beyond their restricted interaction capabilities and providing functionalities that go far beyond their original purpose. They feature computing capabilities and are thus able to capture information, process and store it, and interact with their environments, turning them into smart objects

    Digital Labeling and Narrative Mapping in Mobile Remote Audio Signage: Verbalization of Routes and Generation of New Verbal Route Descriptions from Existing Route Sets

    Independent navigation is a great challenge for people with visual impairments. In this project, we have designed and implemented an assisted navigation solution based on the ability of visually impaired (VI) travelers to interpret and contextualize verbal route descriptions. Previous studies have validated that if a route is verbally described in a sufficient and appropriate manner, then VI travelers can use their orientation and mobility skills to successfully follow it. In this project, we do not consider how the VI traveler will interpret the route descriptions; instead, we aim to identify and generate new verbal route descriptions from the existing ones. We discuss the algorithms we have used for extracting landmarks, building graphs, and generating new route descriptions from existing route information
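    The core idea of deriving new routes from an existing route set can be sketched as a graph problem: treat each known route as a sequence of landmarks, merge all routes into a directed graph, and search that graph for a landmark sequence between endpoints that no single existing route connects. This is a minimal illustration under those assumptions, not the paper's actual algorithms; the landmark names and data are invented.

```python
from collections import deque

def build_landmark_graph(routes):
    """Merge verbal routes (landmark sequences) into a directed adjacency map."""
    graph = {}
    for route in routes:
        for a, b in zip(route, route[1:]):
            graph.setdefault(a, set()).add(b)
            graph.setdefault(b, set())
    return graph

def generate_route(graph, start, goal):
    """Breadth-first search for a shortest landmark sequence start -> goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no combination of known segments reaches the goal

# Two existing route descriptions reduced to their landmarks (illustrative).
routes = [
    ["entrance", "fountain", "library"],
    ["fountain", "cafe", "bus stop"],
]
graph = build_landmark_graph(routes)
# No single route goes from the entrance to the bus stop,
# but the merged graph yields: entrance -> fountain -> cafe -> bus stop.
```

    The resulting landmark sequence would still need to be rendered back into verbal instructions; this sketch covers only the graph-construction and generation steps the abstract names.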

    Chinese Character Translator on Mobile Phone using Optical Character Recognition and Bing Translator API

    Chinese is one of the international languages, with users comprising almost 35% of the world's population. Nonetheless, Chinese is difficult to learn to read and write because it is written with characters or symbols. The Chinese characters used today are simplified characters, with approximately 3,000 common characters in daily use. These characters can also be written in the Latin alphabet, a form called hanzi/hanyu pinyin. Some application developers, such as Yellow Bridge, Google, Qhanzi, and Bing, provide translator applications from Chinese characters to the Latin alphabet and vice versa. The applications provided are generally still web-based and do not accept Chinese characters as image input, for example from a file or directly from a camera. This research tries to build a Chinese character translator application that uses the Tesseract Optical Character Recognition (OCR) engine to retrieve Chinese characters from an image and then translates them using the Bing Translator API. The application runs on a mobile phone, so the user can use an image or the phone's camera as input. The test results show that the application operates on various Android devices. The OCR engine performs the translation function with a 74% success rate, and the input image can tolerate a skew angle of approximately 15 degrees
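    The reported 74% success rate raises the question of how OCR output is scored. A common metric (not necessarily the one used in this paper) is character-level accuracy derived from Levenshtein edit distance between the reference text and the recognized text; a minimal, self-contained sketch:

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming, two rows at a time."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution / match
        prev = curr
    return prev[-1]

def char_accuracy(reference, recognized):
    """1 - CER: the share of reference characters the OCR engine got right."""
    if not reference:
        return 1.0 if not recognized else 0.0
    return max(0.0, 1 - edit_distance(reference, recognized) / len(reference))
```

    Because the distance operates on Python strings character by character, the same function works for Hanzi and Latin text alike, which is what makes it a convenient yardstick for a Chinese OCR pipeline.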

    Defining Traffic Scenarios for the Visually Impaired

    For the development of a concept for transferring camera-based object detections from Advanced Driver Assistance Systems to the assistance of the visually impaired, we define relevant traffic scenarios and vision use cases by means of problem-centered interviews with four experts and ten members of the target group. We identify six traffic scenarios (general orientation, navigating to an address, crossing a road, obstacle avoidance, boarding a bus, and at the train station), clustered into three categories: Orientation, Pedestrian, and Public Transport. Based on the data, we describe each traffic scenario and derive a summary table, adapted from software engineering, resulting in a collection of vision use cases. The ones that are also of interest in Advanced Driver Assistance Systems (Bicycle, Crosswalk, Traffic Sign, Traffic Light (State), Driving Vehicle, Obstacle, and Lane Detection) build the foundation of our future work. Furthermore, we present social insights gained from the interviews and discuss the indications we gathered by considering the importance of the identified use cases for each interviewed member of the target group