
    Securing Interactive Sessions Using Mobile Device through Visual Channel and Visual Inspection

    A communication channel established from a display to a device's camera is known as a visual channel, and it is useful for securing key exchange protocols. In this paper, we study how a visual channel can be exploited by a network terminal and a mobile device to jointly verify information in an interactive session, and how such information can be presented jointly in a user-friendly manner, taking into account that the mobile device can only capture and display a small region and that the user may only want to authenticate selected regions of interest. Motivated by applications in kiosk computing and multi-factor authentication, we consider three security models: (1) the mobile device is trusted, (2) at most one of the terminal and the mobile device is dishonest, and (3) both the terminal and the device are dishonest but do not collude or communicate. We give two protocols and investigate them under the above models. We point out a form of replay attack that renders some other straightforward implementations cumbersome to use. To enhance user-friendliness, we propose a solution using visual cues embedded into 2D barcodes and incorporate an "augmented reality" framework for easy verification through visual inspection. We give a proof-of-concept implementation to show that our scheme is feasible in practice. Comment: 16 pages, 10 figures

    The use of alignment cells in MMCC barcode

    The QR code, a monochrome 2D barcode, is a popular and commonly used barcode system worldwide. QR codes can easily be read using a mobile phone with the appropriate decoder. As there is an increasing need for higher data capacity barcodes, some newer 2D barcodes, such as the MMCC code, have adopted the use of colour. However, the use of colour introduces more challenges for mobile phone decoders than monochrome codes do. In this paper, the use of alignment cells within the MMCC code is proposed to improve the robustness of the colour barcode when used in a mobile environment. With the addition of the alignment cells, the MMCC code is shown to achieve high data capacity even with a smaller physical size and despite the limitations of mobile phone cameras.
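
    As an illustration of how alignment cells of known colour can help a phone decoder cope with colour distortion, the sketch below fits a simple colour-correction transform from reference cells and then classifies data cells against the palette. The cell layout, palette, and least-squares correction model are illustrative assumptions, not the MMCC specification.

```python
# Hypothetical sketch: using known-colour alignment cells to colour-correct and
# classify data cells of a colour barcode captured by a phone camera.
import numpy as np

# Reference colours assumed to be printed in the alignment cells (RGB); illustrative palette.
PALETTE = np.array([
    [0, 0, 0], [255, 255, 255],
    [255, 0, 0], [0, 255, 0],
    [0, 0, 255], [255, 255, 0],
], dtype=float)

def fit_colour_transform(captured_refs):
    """Least-squares 3x3 transform mapping colours sampled from the alignment
    cells back to the printed reference palette (assumes palette order)."""
    refs = np.asarray(captured_refs, dtype=float)
    transform, _, _, _ = np.linalg.lstsq(refs, PALETTE[:len(refs)], rcond=None)
    return transform

def classify_cell(cell_rgb, transform):
    """Return the index of the palette colour closest to the corrected data cell."""
    corrected = np.asarray(cell_rgb, dtype=float) @ transform
    return int(np.argmin(np.linalg.norm(PALETTE - corrected, axis=1)))
```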

    AirCode: Unobtrusive Physical Tags for Digital Fabrication

    We present AirCode, a technique that allows the user to tag physically fabricated objects with given information. An AirCode tag consists of a group of carefully designed air pockets placed beneath the object surface. These air pockets are easily produced during the fabrication process of the object, without any additional material or post-processing. At the same time, the air pockets affect only the scattering light transport under the surface, and thus are hard to notice with the naked eye. Using a computational imaging method, however, the tags become detectable. We present a tool that automates the design of the air pockets for the user to encode information. The AirCode system also allows the user to retrieve the information from captured images via a robust decoding algorithm. We demonstrate our tagging technique with applications in metadata embedding, robotic grasping, and conveying object affordances. Comment: ACM UIST 2017 Technical Paper

    Distance transform and template matching based methods for localization of barcodes and QR codes

    Visual codes play an important role in automatic identification, which has become an inseparable part of industrial processes. Thanks to the spread of smartphones and telecommunications, visual codes have also become more and more popular in everyday life, embedding web addresses or other short informative texts. While barcode reading is straightforward in images with optimal parameters (focus, illumination, code orientation, and position), localization of code regions is still challenging in many scenarios. Every setup has its own characteristics, therefore many approaches are justifiable. Industrial applications are likely to have more fixed parameters, such as illumination, camera type, and code size, and processing speed and accuracy are the most important requirements. In everyday use, as with smartphone cameras, a wide variety of code types, sizes, noise levels, and degrees of blurring can be observed, but processing speed is often not crucial, and the image acquisition process can be repeated until detection succeeds. In this paper, we address this problem with two novel methods for the localization of 1D barcodes, based on template matching and distance transformation, and a third method for QR codes. Our proposed approaches can simultaneously localize several different types of codes. We compare the effectiveness of the proposed methods with several approaches from the literature using public databases and a large set of synthetic images as a benchmark. The evaluation shows that the proposed methods are efficient, achieving 84.3% Jaccard accuracy, superior to the other approaches. One of the presented approaches is an improvement on our previous work. Our template matching based method is computationally more complex; however, it can be adapted to specific code types, producing high accuracy. The other method uses the distance transformation, which is fast and gives rough regions of interest that can contain valid visual code candidates.
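
    The following is a minimal sketch of the distance-transform idea for rough 1D barcode localization: since a barcode consists of densely packed parallel bars, every pixel inside the code region lies close to an edge. The Canny thresholds, distance cutoff, kernel size, and aspect-ratio filter are illustrative and not taken from the paper.

```python
# A rough distance-transform-based localizer for 1D barcode candidates (sketch).
import cv2
import numpy as np

def barcode_candidates(gray: np.ndarray):
    edges = cv2.Canny(gray, 50, 150)
    # Distance from every pixel to the nearest edge pixel.
    dist = cv2.distanceTransform(255 - edges, cv2.DIST_L2, 3)
    # Barcode interiors stay consistently close to an edge.
    mask = (dist < 4).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE,
                            cv2.getStructuringElement(cv2.MORPH_RECT, (21, 7)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep reasonably large, elongated regions as candidate ROIs.
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 2000]
    return [(x, y, w, h) for (x, y, w, h) in boxes if w > 1.5 * h]

# Usage: boxes = barcode_candidates(cv2.imread("shelf.jpg", cv2.IMREAD_GRAYSCALE))
```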

    A 'Human-in-the-Loop' Mobile Image Recognition Application for Rapid Scanning of Water Quality Test Results

    This paper describes an interactive system for drinking water quality testing in small community supplies, particularly in the developing world. The system combines a low-cost field test (the Aquatest field kit), a mobile phone for data processing and communications, and a human operator who is able to react immediately to a test result. Once a water sample has been collected and incubated, the mobile phone camera is used to 'scan' the test and obtain the result, which is displayed to the user along with information about the health implications of the water quality. Initial prototypes, while not yet sufficiently robust for real-world use, demonstrate that the system is technically feasible. This opens up interesting possibilities for wider use of 'human-in-the-loop' sensor systems in environmental monitoring.
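
    Purely as a hypothetical illustration of the 'scan' step, the sketch below samples a fixed reaction window on the test card and classifies its mean hue against invented thresholds; the window coordinates and decision rule are assumptions, not the actual Aquatest analysis or its calibration.

```python
# Hypothetical colour-classification of a test-card region (illustrative only).
import cv2
import numpy as np

RESULT_REGION = (120, 200, 80, 80)   # x, y, w, h of the reaction window (assumed)

def read_test(image_bgr: np.ndarray) -> str:
    x, y, w, h = RESULT_REGION
    patch = cv2.cvtColor(image_bgr[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
    mean_hue = float(np.mean(patch[:, :, 0]))     # OpenCV hue range: 0..179
    # Invented decision rule: yellow-ish -> positive, blue-ish -> negative.
    if 20 <= mean_hue <= 40:
        return "positive (possible contamination)"
    if 90 <= mean_hue <= 130:
        return "negative"
    return "unreadable - retake photo"
```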

    The implementation of a warehouse management system at small and medium-sized enterprises

    A combination of research methodology approaches has been employed in this paper. It includes a theoretical framework that elaborates the problem identification and the existing supply chain process for introducing an automated Warehouse Management System, followed by a detailed literature review of the complementary supply chain software and hardware needed to ensure the success of the new architecture within the warehouse. The work also covers the critical success factors as well as the key challenges on the way to a smart Warehouse Management System. A practical application at a medium-sized Tunisian textile company illustrates the logistics dynamics after integrating the new management process.

    Fibers and fabrics for chemical and biological sensing

    Wearable sensors can be used to monitor many interesting parameters of the wearer’s physiology and environment, with important applications in personal health and well-being, sports performance and personal safety. Wearable chemical sensors can monitor the status of the wearer by accessing body fluids, such as sweat, in an unobtrusive manner. They can also be used to protect the wearer from hazards in the environment by sampling potentially harmful gas emissions such as carbon monoxide. Integrating chemical sensors into textile structures is a challenging and complex task. Issues which must be considered include sample collection, calibration, waste handling, fouling and reliability. Sensors must also be durable and comfortable to wear. Here we present examples of wearable chemical sensors that monitor the person and also their environment. We also discuss the issues involved in developing wearable chemical sensors and strategies for sensor design and textile integration.

    Data Matrix based low-cost autonomous detection of medicine packages

    Counterfeit medicine is still a crucial problem for healthcare systems, with a huge impact on worldwide health and economies. Medicine packages can be traced from the moment of their production until they are delivered to the customers through the use of Data Matrix codes, unique identifiers that can validate their authenticity. Currently, many practitioners at hospital pharmacies have to manually scan such codes one by one, a very repetitive and burdensome task. In this paper, a system which can simultaneously scan multiple Data Matrix codes and autonomously introduce them into an authentication database is proposed for the Hospital Pharmacy of the Centro Hospitalar de Vila Nova de Gaia/Espinho, E.P.E. Relevant features are its low cost and its seamless integration into the existing infrastructure. The results of the experiments were encouraging, and with upgrades such as real-time feedback on code validation and increased robustness of the hardware system, it is expected that the system can be used as real support for the pharmacists. This work is financed by National Funds through the Portuguese funding agency FCT - Fundação para a Ciência e Tecnologia, within the project LA/P/0063/2020.
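
    A minimal sketch of such a pipeline is shown below, assuming the pylibdmtx library for decoding and a local SQLite table as a stand-in for the hospital's authentication database; the table name and schema are illustrative, not the system described in the paper.

```python
# Sketch: decode every Data Matrix code in one photo and register the unique codes.
import sqlite3
import cv2
from pylibdmtx.pylibdmtx import decode

def scan_tray(image_path: str, db_path: str = "packages.db") -> int:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    results = decode(gray)                        # finds every Data Matrix in the frame
    codes = {r.data.decode("utf-8") for r in results}

    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS scanned_codes "
                 "(code TEXT PRIMARY KEY, scanned_at TEXT DEFAULT CURRENT_TIMESTAMP)")
    for code in codes:
        # INSERT OR IGNORE: re-scanning the same package is not an error.
        conn.execute("INSERT OR IGNORE INTO scanned_codes (code) VALUES (?)", (code,))
    conn.commit()
    conn.close()
    return len(codes)

# Usage: print(scan_tray("tray_photo.jpg"), "unique codes registered")
```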

    Vision Based Extraction of Nutrition Information from Skewed Nutrition Labels

    An important component of a healthy diet is the comprehension and retention of nutritional information and an understanding of how different food items and nutritional constituents affect our bodies. In the U.S. and many other countries, nutritional information is primarily conveyed to consumers through nutrition labels (NLs), which can be found on all packaged food products. However, even health-conscious consumers can find it challenging to use all the information available in these NLs: they may not be familiar with nutritional terms, or they may find it difficult to integrate nutritional data collection into their daily activities due to a lack of time, motivation, or training. It is therefore essential to automate this data collection and interpretation process with computer vision algorithms that extract nutritional information from NLs, because doing so improves the user’s ability to engage in continuous nutritional data collection and analysis. To make nutritional data collection more manageable and enjoyable for users, we present a Proactive NUTrition Management System (PNUTS). PNUTS seeks to shift current research and clinical practices in nutrition management toward persuasion, automated nutritional information processing, and context-sensitive nutrition decision support. PNUTS consists of two modules. The first is a barcode scanning module that runs on smartphones and is capable of vision-based localization of One-Dimensional (1D) Universal Product Code (UPC) and International Article Number (EAN) barcodes with relaxed pitch, roll, and yaw camera alignment constraints. The algorithm localizes barcodes in images by computing Dominant Orientations of Gradients (DOGs) of image segments and grouping smaller segments with similar DOGs into larger connected components. Connected components that pass given morphological criteria are marked as potential barcodes. The algorithm is implemented in a distributed, cloud-based system. The system’s front end is a smartphone application that runs on Android smartphones with Android 4.2 or higher. The system’s back end is deployed on a five-node Linux cluster where images are processed. The algorithm was evaluated on a corpus of 7,545 images extracted from 506 videos of bags, bottles, boxes, and cans in a supermarket. The DOG algorithm was coupled to our in-place scanner for 1D UPC and EAN barcodes. The scanner receives from the DOG algorithm the rectangular planar dimensions of a connected component and the component’s dominant gradient orientation angle, referred to as the skew angle. The scanner draws several scan lines at that skew angle within the component to recognize the barcode in place, without any rotations. The scanner coupled to the localizer was tested on the same corpus of 7,545 images. Laboratory experiments indicate that the system can localize and scan barcodes of any orientation in the yaw plane, of up to 73.28 degrees in the pitch plane, and of up to 55.5 degrees in the roll plane. The videos have been made public for all interested research communities to replicate our findings or to use them in their own research. The front-end Android application is available for free download on Google Play under the title NutriGlass. This module is also coupled to a comprehensive NL database from which nutritional information can be retrieved on demand. Currently our NL database consists of more than 230,000 products.
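
    The sketch below condenses the DOG idea into a few lines (it is not the authors' implementation): the image is split into blocks, blocks with strong and strongly oriented gradients are kept, and neighbouring surviving blocks are grouped into candidate barcode components. The block size, thresholds, and area criterion are illustrative.

```python
# Simplified dominant-orientation-of-gradients barcode localizer (sketch).
import cv2
import numpy as np

def dog_candidates(gray, block=20, mag_thresh=40.0):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True)
    rows, cols = gray.shape[0] // block, gray.shape[1] // block
    keep = np.zeros((rows, cols), np.uint8)
    for r in range(rows):
        for c in range(cols):
            m = mag[r*block:(r+1)*block, c*block:(c+1)*block]
            a = ang[r*block:(r+1)*block, c*block:(c+1)*block] % 180.0
            if m.mean() < mag_thresh:
                continue
            hist, _ = np.histogram(a, bins=18, range=(0, 180), weights=m)
            # "Strongly oriented": one orientation bin dominates the block.
            if hist.max() > 0.5 * hist.sum():
                keep[r, c] = 1
    # Neighbouring surviving blocks form connected components; small components
    # are rejected as a stand-in for the morphological criteria. A fuller version
    # would also require neighbouring blocks to share a similar dominant orientation.
    n, _, stats, _ = cv2.connectedComponentsWithStats(keep, connectivity=8)
    boxes = []
    for i in range(1, n):
        x, y, w, h, area = stats[i]
        if area >= 4:
            boxes.append((x * block, y * block, w * block, h * block))
    return boxes
```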
The second module of PNUTS is an algorithm whose objective is to determine the text skew angle of an NL image without constraining the angle’s magnitude. The horizontal, vertical, and diagonal coefficient matrices of the Two-Dimensional (2D) Haar Wavelet Transform are used to identify 2D points with significant intensity changes. The set of points is bounded with a minimum-area rectangle whose rotation angle is the text’s skew. The algorithm’s performance is compared with the performance of five text skew detection algorithms on 1,001 U.S. nutrition label images and 2,200 single- and multi-column document images in multiple languages. To ensure the reproducibility of the reported results, the source code of the algorithm and the image data have been made publicly available. If the skew angle is estimated correctly, optical character recognition (OCR) techniques can be used to extract the nutrition information.
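
A minimal sketch of this skew-estimation idea, using PyWavelets for the 2D Haar transform and OpenCV's minimum-area rectangle, is given below; the significance threshold is illustrative rather than the paper's tuned value.

```python
# Sketch: estimate text skew from significant Haar-wavelet detail coefficients.
import cv2
import numpy as np
import pywt

def estimate_skew(gray: np.ndarray) -> float:
    _, (cH, cV, cD) = pywt.dwt2(gray.astype(np.float32), "haar")
    detail = np.abs(cH) + np.abs(cV) + np.abs(cD)   # strength of local intensity change
    thresh = detail.mean() + 2 * detail.std()       # keep only significant points
    ys, xs = np.nonzero(detail > thresh)
    points = np.column_stack((xs, ys)).astype(np.float32)
    if len(points) < 5:
        return 0.0
    # The rotation of the minimum-area rectangle around the significant points
    # is taken as the text skew (mind OpenCV's minAreaRect angle convention).
    (_, _), (_, _), angle = cv2.minAreaRect(points)
    return angle

# Usage: angle = estimate_skew(cv2.imread("label.jpg", cv2.IMREAD_GRAYSCALE))
#        deskew the label by this angle before running OCR.
```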