
    Boosting Economic Growth Through Advanced Machine Vision

    In this chapter, we overview the potential of machine vision and related technologies in several application domains of critical importance for economic growth and prosperity: healthcare, energy and environment, finance, and industrial innovation. The visual technologies considered encompass augmented and virtual reality, 3D technologies, and media content authoring tools. We overview the main challenges facing each application domain and discuss the potential of machine vision technologies to address them. In healthcare, the rising incidence of chronic diseases and the urgent need for preventive care are accelerating the deployment of telemedicine. Telemedicine, as defined in the EU Commission staff working paper “Telemedicine for the benefit of patients, healthcare systems and society” (COM-SEC, 2009), is the delivery of healthcare services at a distance using information and communication technologies. There are two main groups of telemedicine applications: (1) applications linking a patient with a health professional; and (2) applications linking two health professionals (such as tele-second opinion and teleradiology). Machine vision technologies, coupled with reliable networking infrastructure, are key to accelerating the penetration of telemedicine applications, and several examples are drawn to illustrate their use. Sustainable energy and environment are key pillars of a sustainable economy, and technology is playing an increasingly vital role in energy, environment and water resources management, fostering greater control over both the demand and supply sides of energy and water. On the demand side, technologies including machine vision could help in developing advanced visual metering; on the supply side, machine vision technologies could help in exploring alternative sources for energy generation and water supply. In the finance domain, financial crises and the failure of banking systems are major challenges facing the coming decade, and recovery remains far from reach, entailing a major economic slowdown. Machine vision technologies offer the potential for greater risk visibility, prediction of downturns, and stress testing of the soundness of the financial system; examples are drawn from 3D/AR/VR applications in finance. Innovation can be seen as the process of deploying breakthrough research outcomes in industry. The innovation process can be conceived as a feedback loop that starts by channelling the outcomes of basic research into industrial production; marketing strategies and novel approaches to customer relationship management then close the loop, continuously updating the feed of breakthrough research into industrial production. Machine vision technologies are key along this feedback loop, particularly in visualising the potential market and the potential route to market. CYBER II technology (Hasenfratz et al., 2003, 2004), based on multi-camera image acquisition of real moving bodies from different viewpoints, is described in Section 6 together with its potential applications across the considered domains. The chapter concludes with a comparative analysis of the penetration of machine vision in the various application domains and reflects on the horizon of machine vision in boosting economic growth.

    State-of-the-Art Sensors Technology in Spain 2015: Volume 1

    This book provides a comprehensive overview of state-of-the-art sensor technology in specific leading areas. Industrial researchers, engineers and professionals can find information on the most advanced technologies and developments, together with data processing. Further research covers specific devices and technologies that capture and distribute data to be processed by applying dedicated techniques or procedures, which is where sensors play the most important role. The book provides insights and solutions for problems spanning a broad spectrum of possibilities, thanks to a set of applications and solutions based on sensor technologies. Topics include:
    • Signal analysis for spectral power
    • 3D precise measurements
    • Electromagnetic propagation
    • Drug detection
    • e-health environments based on social sensor networks
    • Robots in wireless environments: navigation, teleoperation, object grasping, demining
    • Wireless sensor networks
    • Industrial IoT
    • Insights into smart cities
    • Voice recognition
    • FPGA interfaces
    • A flight mill device for measurements on insects
    • Optical systems: UV, LEDs, lasers, fiber optics
    • Machine vision
    • Power dissipation
    • Liquid level in fuel tanks
    • Parabolic solar tracker
    • Force sensors
    • Control for a twin rotor

    Indoor place classification for intelligent mobile systems

    University of Technology Sydney, Faculty of Engineering and Information Technology. Place classification is an emerging theme in the study of human-robot interaction, which requires a common understanding of human-defined concepts between humans and machines. This requirement poses a significant challenge to current intelligent mobile systems, which typically operate in absolute coordinate systems and are hence unaware of semantic labels. Aimed at filling this gap, the objective of this research is to develop an approach that lets intelligent mobile systems understand and label indoor environments in a holistic way based on sensory observations. Focusing on commonly available sensors and machine learning based solutions, which play a significant role in place classification research, this work proposes ways to train a machine to assign unknown instances to concepts understandable to human beings, such as room, office and corridor, using both independent and structured prediction. The solution that models dependencies between random variables, taking the spatial relationships between observations into consideration, is further extended by integrating the logical coexistence of objects and places, giving the machine an additional object detection capability. The main techniques involve logistic regression, support vector machines, and conditional random fields, in both supervised and semi-supervised learning frameworks. Experiments in a variety of environments show convincing place classification results from machine learning based approaches on data collected with either single or multiple sensory modalities; modelling spatial dependencies and introducing a semi-supervised learning paradigm further improve the accuracy of the predictions and the generalisation ability of the system; and vision-based object detection can be seamlessly integrated into the learning framework to enhance the discrimination ability and flexibility of the system. The contributions of this research lie in the in-depth study of place classification solutions with independent predictions, the improvement of the system's generalisation ability through a semi-supervised learning paradigm, the formulation of training a conditional random field with partially labelled data, and the integration of multiple cues from two sensory modalities to improve the system's functionality. It is anticipated that the findings of this research will significantly enhance current capabilities in human-robot and robot-environment interaction.
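
    As a hedged illustration of the simplest setting described above (independent prediction with a standard supervised learner), the sketch below trains a logistic-regression place classifier on hand-crafted features from simulated 2D range scans. The feature choices and the synthetic data are assumptions for illustration only, not the thesis's actual pipeline.

    # Minimal sketch: independent place classification from range-scan
    # features (hypothetical features and synthetic data; the thesis also
    # covers SVMs and conditional random fields).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    PLACES = ["room", "office", "corridor"]

    def scan_features(ranges):
        """Simple geometric features of one scan: mean range, range
        variance, and the max-to-min range ratio."""
        ranges = np.asarray(ranges)
        return np.array([ranges.mean(),
                         ranges.var(),
                         ranges.max() / (ranges.min() + 1e-6)])

    def fake_scan(place):
        # Synthetic 180-beam scans standing in for labelled sensor data.
        base = {"room": 3.0, "office": 2.0, "corridor": 6.0}[place]
        return np.clip(base + rng.normal(0.0, 0.5, size=180), 0.1, None)

    labels = PLACES * 200
    X = np.array([scan_features(fake_scan(p)) for p in labels])
    y = np.array([PLACES.index(p) for p in labels])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))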

    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the first-person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Due to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches combine particular sets of image features with quantitative methods to accomplish specific objectives such as object detection, activity recognition, and user-machine interaction. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among other aspects, the most commonly used features, methods, challenges and opportunities within the field.
    Keywords: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction

    Computational Contributions to the Automation of Agriculture

    The purpose of this paper is to explore ways that computational advancements have enabled the complete automation of agriculture from start to finish. With a major need for agricultural advancement because of food and water shortages, some farmers have begun creating their own solutions to these problems. Primarily explored in this paper, however, are current research topics in the automation of agriculture. Digital agriculture is surveyed, focusing on ways that data collection can be beneficial. Self-driving technology is then explored, with emphasis on farming applications. Machine vision technology is also detailed, with specific application to weed management and the harvesting of crops. Finally, the effects of automating agriculture are briefly considered, including effects on labor, the environment, and farmers themselves.
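
    As a hedged illustration of the weed-management application mentioned above, the sketch below separates vegetation from soil using the Excess Green index (ExG = 2g - r - b on chromaticity-normalised channels), a common first stage in machine-vision weeding pipelines. The threshold value and the random stand-in image are illustrative assumptions, not this paper's method.

    # Sketch of vegetation/soil segmentation with the Excess Green index,
    # a common first step in machine-vision weed management. The threshold
    # and the random stand-in image are illustrative assumptions.
    import numpy as np

    def vegetation_mask(rgb, threshold=0.1):
        """rgb: (H, W, 3) float array in [0, 1]; returns a boolean mask."""
        # Chromaticity normalisation so that r + g + b = 1 at each pixel.
        total = rgb.sum(axis=2, keepdims=True) + 1e-6
        r, g, b = np.moveaxis(rgb / total, 2, 0)
        exg = 2 * g - r - b           # Excess Green index
        return exg > threshold        # True where vegetation is likely

    if __name__ == "__main__":
        img = np.random.rand(240, 320, 3)   # stand-in for a field image
        mask = vegetation_mask(img)
        print(f"vegetation pixels: {mask.mean():.1%}")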

    Multimedia information technology and the annotation of video

    The state of the art in multimedia information technology has not progressed to the point where a single solution is available to meet all reasonable needs of documentalists and users of video archives. In general, we do not have an optimistic view of the usability of new technology in this domain, but digitization and digital processing power can be expected to cause a small revolution in the area of video archiving. The volume of data leads to two views of the future: on the pessimistic side, the overload of data will outstrip annotation capacity; on the optimistic side, there will be enough data from which to learn selected concepts that can be deployed to support automatic annotation. At the threshold of this interesting era, we attempt to describe the state of the art in the technology. We sample the progress in text, sound, and image processing, as well as in machine learning.

    Arguing Machines: Human Supervision of Black Box AI Systems That Make Life-Critical Decisions

    We consider the paradigm of a black box AI system that makes life-critical decisions. We propose an "arguing machines" framework that pairs the primary AI system with a secondary one that is independently trained to perform the same task. We show that disagreement between the two systems, without any knowledge of underlying system design or operation, is sufficient to arbitrarily improve the accuracy of the overall decision pipeline given human supervision over disagreements. We demonstrate this system in two applications: (1) an illustrative example of image classification and (2) large-scale real-world semi-autonomous driving data. For the first application, we apply this framework to image classification, achieving a reduction from 8.0% to 2.8% in top-5 error on ImageNet. For the second application, we apply this framework to Tesla Autopilot and demonstrate the ability to predict 90.4% of system disengagements that were labeled by human annotators as challenging and needing human supervision.
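
    A minimal sketch of the disagreement mechanism follows: two independently trained classifiers are compared on held-out data, and only the cases where they disagree are routed to a human, idealised here as always correct, so the remaining errors are the cases where both systems agree and are wrong. The models and synthetic data are stand-ins, not the paper's ImageNet or Tesla Autopilot systems.

    # Sketch of the "arguing machines" idea: a primary and an independently
    # trained secondary model vote, and disagreements are routed to a human,
    # idealised here as always correct. Models and data are synthetic stand-ins.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, y_tr, X_te, y_te = X[:1000], y[:1000], X[1000:], y[1000:]

    primary = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    secondary = RandomForestClassifier(n_estimators=100,
                                       random_state=0).fit(X_tr, y_tr)

    p1, p2 = primary.predict(X_te), secondary.predict(X_te)
    disagree = p1 != p2

    # Primary system alone: accept its output everywhere.
    baseline_err = (p1 != y_te).mean()
    # With supervision: a human resolves every disagreement correctly, so the
    # remaining errors are the cases where both systems agree and are wrong.
    supervised_err = ((p1 != y_te) & ~disagree).mean()

    print(f"disagreement rate:    {disagree.mean():.1%}")
    print(f"error, primary alone: {baseline_err:.1%}")
    print(f"error, with human:    {supervised_err:.1%}")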