2,044 research outputs found

    An Evaluation of Deep Learning-Based Object Identification

    Get PDF
    Identifying instances of semantic objects of a particular class is one of the most crucial and challenging areas of computer vision, and it has become deeply embedded in everyday life through applications such as autonomous driving and security monitoring. Recent developments in deep learning networks for detection have improved object detector accuracy. To provide a detailed review of the current state of object detection pipelines, we begin by analyzing the methodologies employed by classical detection models and describing the benchmark datasets used in this study. We then examine one-stage and two-stage detectors in detail before summarizing several object detection approaches. In addition, we survey both established and emerging applications rather than restricting the review to a single branch of object detection. Finally, we discuss how various object detection algorithms can be combined into a system that is both efficient and effective, and we identify a number of emerging trends to guide the use of the most recent algorithms and future research.
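    As a concrete illustration of the two-stage detection pipelines reviewed above, the following is a minimal sketch that runs a pretrained Faster R-CNN from torchvision; the model choice, score threshold, and image path are illustrative assumptions, not details from the paper.

        # Minimal sketch: running a pretrained two-stage detector (Faster R-CNN) with torchvision.
        # Model choice, threshold, and image path are illustrative assumptions.
        import torch
        from torchvision.io import read_image
        from torchvision.models.detection import fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights

        weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
        model = fasterrcnn_resnet50_fpn(weights=weights).eval()
        preprocess = weights.transforms()

        image = read_image("street_scene.jpg")           # hypothetical input frame
        with torch.no_grad():
            prediction = model([preprocess(image)])[0]   # dict with boxes, labels, scores

        for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
            if score > 0.5:                              # keep confident detections only
                print(weights.meta["categories"][label], box.tolist(), float(score))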

    Automated Complexity-Sensitive Image Fusion

    Get PDF
    To construct a complete representation of a scene in the presence of environmental obstacles such as fog, smoke, darkness, or textural homogeneity, multisensor video streams captured in different modalities are considered. A computational method for automatically fusing multimodal image streams into a highly informative and unified stream is proposed. The method consists of the following steps: (1) image registration is performed to align video frames in the visible band over time, adapting to the nonplanarity of the scene by automatically subdividing the image domain into regions approximating planar patches; (2) wavelet coefficients are computed for each of the input frames in each modality; (3) corresponding regions and points are compared using spatial and temporal information across various scales; (4) decision rules based on the results of multimodal image analysis are used to combine the wavelet coefficients from the different modalities; (5) the combined wavelet coefficients are inverted to produce an output frame containing useful information gathered from the available modalities. Experiments show that the proposed system is capable of producing fused output that retains the characteristics of color visible-spectrum imagery while adding information exclusive to infrared imagery, with attractive visual and informational properties.
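    The coefficient-level fusion in steps 2 through 5 can be sketched with PyWavelets; the single decomposition, the averaging of the approximation band, and the max-absolute-value rule below are simplifying assumptions standing in for the paper's decision rules.

        # Minimal sketch of multimodal wavelet fusion (steps 2-5), assuming pre-registered,
        # same-sized grayscale frames. The max-absolute-value rule is an illustrative stand-in
        # for the paper's decision rules.
        import numpy as np
        import pywt

        def fuse_frames(visible: np.ndarray, infrared: np.ndarray,
                        wavelet: str = "db2", level: int = 3) -> np.ndarray:
            vis_coeffs = pywt.wavedec2(visible.astype(float), wavelet, level=level)
            ir_coeffs = pywt.wavedec2(infrared.astype(float), wavelet, level=level)

            fused = [(vis_coeffs[0] + ir_coeffs[0]) / 2.0]          # average the approximation band
            for (vh, vv, vd), (ih, iv, idg) in zip(vis_coeffs[1:], ir_coeffs[1:]):
                fused.append(tuple(np.where(np.abs(v) >= np.abs(i), v, i)   # keep the stronger detail coefficient
                                   for v, i in ((vh, ih), (vv, iv), (vd, idg))))
            return pywt.waverec2(fused, wavelet)                     # invert to get the fused frame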

    Moving object detection for automobiles by the shared use of H.264/AVC motion vectors : innovation report.

    Get PDF
    Cost is one of the barriers to wider adoption of Advanced Driver Assistance Systems (ADAS) in China. The objective of this research project is to develop a low-cost ADAS by the shared use of motion vectors (MVs) from an H.264/AVC video encoder originally designed for video recording only. There have been few studies on the use of MVs from video encoders on a moving platform for moving object detection. The main contribution of this research is the novel algorithm proposed to address the problems of moving object detection when MVs from an H.264/AVC encoder are used. The approach is suitable for mass-produced in-vehicle devices because it reuses the encoder's MVs for moving object detection, reducing the cost and complexity of the system while providing the recording function by default at no extra cost. The estimated cost of the proposed system is 50% lower than that of a system based on the optical flow approach. To reduce the area of the region of interest and to meet the real-time computation requirement, a new block-based region-growing algorithm is used for road region detection. To account for the small amplitude and limited precision of H.264/AVC MVs on relatively slow-moving objects, the detection task separates the region of interest into relatively fast and relatively slow speed regions by examining the amplitude of the MVs, the position of the focus of expansion, and the result of road region detection. Relatively slow-moving objects are detected and tracked using generic horizontal and vertical contours of rear-view vehicles. This method addresses the problem that H.264/AVC encoders produce motion vectors of limited precision, and sometimes erroneous ones, for relatively slow-moving objects and for regions near the focus of expansion. Relatively fast-moving objects are detected by a two-stage approach consisting of a Hypothesis Generation (HG) stage and a Hypothesis Verification (HV) stage. This approach addresses the problem that H.264/AVC MVs are generated for coding efficiency rather than for minimising the motion error of objects. The HG stage reports a potential moving object by clustering the planar parallax residuals that satisfy the constraints set out in the algorithm. The HV stage then verifies the existence of the moving object based on the temporal consistency of its displacement across successive frames. The test results show a vehicle detection rate higher than 90%, which is on a par with methods proposed by other authors, and a computation cost low enough to meet the real-time performance requirement. An invention patent, one international journal paper, and two international conference papers have been either published or accepted, showing the originality of the work in this project. One further international journal paper is under preparation.
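    The split of the region of interest by motion-vector amplitude can be sketched as follows; the per-block MV array layout and the threshold value are illustrative assumptions, not values taken from the report.

        # Minimal sketch: partitioning decoded H.264/AVC motion vectors into relatively fast
        # and relatively slow regions by amplitude. The block grid and threshold are assumptions.
        import numpy as np

        def split_by_mv_amplitude(mv_field: np.ndarray, threshold: float = 2.0):
            """mv_field: (rows, cols, 2) array of per-block motion vectors in pixels."""
            amplitude = np.linalg.norm(mv_field, axis=-1)   # per-block MV magnitude
            fast_mask = amplitude >= threshold              # candidates for the parallax-based HG/HV stages
            slow_mask = ~fast_mask                          # candidates for contour-based detection
            return fast_mask, slow_mask

        # Example with a synthetic 4x4 block grid of motion vectors.
        mv = np.zeros((4, 4, 2))
        mv[1, 2] = (5.0, 1.0)                               # one fast-moving block
        fast, slow = split_by_mv_amplitude(mv)
        print(fast.sum(), "fast blocks,", slow.sum(), "slow blocks")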

    Virtual Reality Engine Development

    Get PDF
    With the advent of modern graphics and computing hardware and cheaper sensor and display technologies, virtual reality is becoming increasingly popular in the fields of gaming, therapy, training, and visualization. Earlier attempts at popularizing VR technology were plagued by issues of cost, portability, and marketability to the general public. Modern screen technologies make it possible to produce cheap, light head-mounted displays (HMDs) like the Oculus Rift, and modern GPUs make it possible to create and deliver a seamless real-time 3D experience to the user. 3D sensing has found an application in virtual and augmented reality as well, allowing for a higher level of interaction between the real and the simulated. Issues still persist, however. Many modern graphics/game engines do not provide developers with an intuitive or adaptable interface for incorporating these new technologies. Those that do tend to treat VR as a novelty afterthought, and even then only provide tailor-made extensions for specific hardware. The goal of this paper is to design and implement a functional, general-purpose VR engine that uses abstract interfaces for most of the hardware components involved, allowing easy extensibility for the developer.
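    An abstract hardware interface of the kind described might look roughly like the sketch below; the class and method names are hypothetical and not taken from the engine in the paper, and Python is used for brevity even though a VR engine would more likely be written in C++.

        # Hypothetical sketch of an abstract display-device interface, illustrating the idea of
        # hiding HMD-specific details behind a common API. Names are illustrative only.
        from abc import ABC, abstractmethod
        from typing import Tuple

        class DisplayDevice(ABC):
            """Common interface implemented by desktop windows and HMDs such as the Oculus Rift."""

            @abstractmethod
            def resolution(self) -> Tuple[int, int]:
                """Per-eye render-target resolution in pixels."""

            @abstractmethod
            def head_pose(self) -> Tuple[float, float, float, float]:
                """Latest head-orientation quaternion (w, x, y, z); identity for a fixed display."""

            @abstractmethod
            def present(self, left_eye_frame, right_eye_frame) -> None:
                """Submit the rendered eye buffers (with any lens distortion applied) for display."""

        class DesktopWindow(DisplayDevice):
            def resolution(self):
                return (1920, 1080)
            def head_pose(self):
                return (1.0, 0.0, 0.0, 0.0)          # no head tracking on a flat screen
            def present(self, left_eye_frame, right_eye_frame):
                pass                                 # a real engine would draw a single merged view here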

    Real-time simulator of collaborative and autonomous vehicles

    Get PDF
    In recent decades, the emergence of driver assistance systems has been driven mainly by the development of various technologies and of the associated mathematical tools. This has profoundly affected transportation systems and given rise to the field of intelligent transportation systems (ITS). We are now witnessing the growth of the market for intelligent vehicles equipped with driver assistance systems and inter-vehicle communication. Intelligent vehicles and infrastructures will change the way we drive on the roads and could solve a large part of the problems caused by road traffic, such as accidents, congestion, and pollution. However, the proper operation and reliability of new generations of transportation systems require complete mastery of the different stages of their design, particularly with regard to embedded systems. Clearly, identifying and correcting defects in embedded systems are two essential tasks, both for the protection of human life and for preserving the integrity of vehicles and urban infrastructure. To this end, real-time numerical simulation is the most appropriate approach for testing and validating driving systems and intelligent vehicles, and it offers numerous advantages that make it indispensable for the design of embedded systems. Consequently, in this project we present a new real-time simulation platform for intelligent and autonomous vehicles in collaborative driving. The project is based on two main components. The first is the product suite of OPAL-RT Technologies, in particular the RT-LAB (Real Time LABoratory) software, the Orchestra application, and the simulation machines dedicated to real-time simulation and parallel computation; the second is Pro-SiVIC, used to simulate vehicle dynamics, the behaviour of on-board sensors, and the infrastructure. This new platform (Pro-SiVIC/RT-LAB) will make it possible to test embedded systems (sensors, actuators, algorithms) as well as inter-vehicle communication, to identify and correct software problems and errors, and finally to validate embedded systems even before prototyping.
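    A co-simulation coupling of this kind typically exchanges actuator commands and sensor measurements at a fixed step; the toy loop below is purely illustrative, uses a plain UDP socket with made-up ports and message fields, and does not use the RT-LAB, Orchestra, or Pro-SiVIC APIs.

        # Purely illustrative fixed-step co-simulation exchange loop; ports, payloads, and
        # the 10 ms step are assumptions, not the platform's actual interfaces.
        import json
        import socket

        STEP_S = 0.01                                    # 10 ms synchronization step
        SENSOR_SIM = ("127.0.0.1", 5005)                 # stand-in for the vehicle/sensor simulator
        CONTROL_SIM = ("127.0.0.1", 5006)                # stand-in for the embedded-controller model

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(CONTROL_SIM)
        sock.settimeout(1.0)

        t = 0.0
        for _ in range(100):
            sock.sendto(json.dumps({"t": t, "throttle": 0.2, "steer": 0.0}).encode(), SENSOR_SIM)
            try:
                data, _ = sock.recvfrom(4096)            # sensor measurements for this step
                measurements = json.loads(data)
            except OSError:
                measurements = None                      # missed deadline: a fault a real test bench would log
            t += STEP_S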

    A systematic literature review on the relationship between autonomous vehicle technology and traffic-related mortality.

    Get PDF
    Master's thesis -- Seoul National University, Graduate School of Public Administration, Global Public Administration Major, February 2023. Advisor: Choi Tae-hyun. Society is anticipated to gain a great deal from Autonomous Vehicles (AVs), such as improved traffic flow and a decrease in accidents. AVs rely heavily on improvements in various Artificial Intelligence (AI) processes and strategies. Though some researchers in this field believe AVs are the key to enhancing safety, others believe they create new challenges when it comes to ensuring the security of these new technologies, systems, and applications. This article conducts a systematic literature review on the relationship between autonomous vehicle technology and traffic-related mortality. Articles from EBSCO, ProQuest, IEEE Xplore, and Web of Science were selected and sorted according to inclusion and exclusion criteria. The findings reveal that most of these publications have appeared in leading transport-related journals. Future improvements in the automobile industry and the development of intelligent transportation systems could help reduce the number of fatal traffic accidents. Autonomous vehicle technologies provide effective ways to enhance the driving experience and reduce the number of traffic accidents, and autonomous driving technology can help with a multitude of driving-related problems such as crashes, congestion, energy usage, and environmental pollution. The large majority of the studies that were assessed require more research: they need to be expanded so that they can be tested in real-world or computer-simulated scenarios, in better and more realistic settings, with better and more data, and in experimental designs where the results of the proposed strategy are compared with industry standards and competing strategies. Additional study with improved methods is therefore needed. Another major area that requires further research is the moral and ethical choices made by AVs. Governments, policy makers, manufacturers, and designers all need to take many actions in order to deploy autonomous vehicles on the road effectively. Governments in particular should develop laws, rules, and an action plan. It is important to create more effective programs that encourage the adoption of emerging technologies in transportation systems, such as driverless vehicles. In this regard, user perception becomes essential, since it can inform designers about current issues and observations made by people. The perceptions of autonomous car users in developing countries like Azerbaijan have not been thoroughly studied up to this point. Manufacturers have to fix system flaws and need good data sets for efficient operation. In the not-too-distant future, the widespread use of highly automated vehicles may open up intriguing new possibilities for resolving persistent issues in current safety-related research. Further research is required to better understand and quantify the significant policy implications of AVs, taking into consideration factors such as penetration rate, public adoption, technological advancements, traffic patterns, and business models. The review considered only peer-reviewed, full-text journal papers, and a larger database with more documents would clearly provide more results and a more thorough analysis.

    Combined Learned and Classical Methods for Real-Time Visual Perception in Autonomous Driving

    Full text link
    Autonomy, robotics, and Artificial Intelligence (AI) are among the main defining themes of next-generation societies. Among the most important applications of these technologies is driving automation, which spans from various Advanced Driver Assistance Systems (ADAS) to fully self-driving vehicles. Driving automation promises to reduce accidents, increase safety, and broaden access to mobility for more people, such as the elderly and people with disabilities. However, one of the main challenges facing autonomous vehicles is robust perception, which enables safe interaction and decision making. Among the many sensors used to perceive the environment, each with its own capabilities and limitations, vision is one of the main sensing modalities: cameras are cheap and provide rich information about the observed scene. Therefore, this dissertation develops a set of visual perception algorithms with a focus on autonomous driving as the target application area. The dissertation starts by addressing the problem of real-time motion estimation of an agent using only the visual input from a camera attached to it, a problem known as visual odometry. The visual odometry algorithm achieves low drift rates over long travelled distances, made possible by the innovative local mapping approach used. This visual odometry algorithm was then combined with my multi-object detection and tracking system. The tracking system operates in a tracking-by-detection paradigm in which an object detector based on convolutional neural networks (CNNs) is used. The combined system can therefore detect and track other traffic participants both in the image domain and in the 3D world frame while simultaneously estimating vehicle motion, a necessary requirement for obstacle avoidance and safe navigation. Finally, the operational range of traditional monocular cameras was expanded with the capability to infer depth, allowing them to replace stereo and RGB-D cameras. This is accomplished through a single-stream convolutional neural network that outputs both depth prediction and semantic segmentation. Semantic segmentation is the process of classifying each pixel in an image and is an important step toward scene understanding. A literature survey, algorithm descriptions, and comprehensive evaluations on real-world datasets are presented. Ph.D. dissertation, College of Engineering & Computer Science, University of Michigan. https://deepblue.lib.umich.edu/bitstream/2027.42/153989/1/Mohamed Aladem Final Dissertation.pdf
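    For context, the frame-to-frame step of a generic monocular visual odometry pipeline (not the dissertation's local-mapping method) can be sketched with OpenCV as follows; the ORB settings and camera intrinsics K are placeholder assumptions.

        # Minimal frame-to-frame visual odometry sketch with OpenCV: ORB features, essential
        # matrix, and pose recovery. This is a generic baseline, not the dissertation's
        # local-mapping algorithm; the intrinsics K are placeholder values.
        import cv2
        import numpy as np

        K = np.array([[718.0,   0.0, 607.0],
                      [  0.0, 718.0, 185.0],
                      [  0.0,   0.0,   1.0]])            # assumed pinhole intrinsics

        def relative_pose(prev_gray: np.ndarray, curr_gray: np.ndarray):
            orb = cv2.ORB_create(2000)
            kp1, des1 = orb.detectAndCompute(prev_gray, None)
            kp2, des2 = orb.detectAndCompute(curr_gray, None)

            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]

            pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
            pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

            E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
            _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
            return R, t                                  # rotation and unit-scale translation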

    Leveraging a Publish/Subscribe Fog System to Provide Collision Warnings in Vehicular Networks

    Full text link
    [EN] Fog computing, an extension of the cloud computing paradigm in which routers themselves may provide the virtualisation infrastructure, aims at achieving fluidity when distributing in-network functions, in addition to allowing fast and scalable processing and exchange of information. In this paper we present a fog computing architecture based on a content island which interconnects sets of things to exchange and process data among themselves or with other content islands. We then present a use case that focuses on a smartphone-based forward collision warning application for a connected vehicle scenario. This application uses the optical sensor of the smartphone to estimate the distance between the device itself and other vehicles in its field of view. The vehicle travelling directly ahead is identified by relying on information from the GPS, the camera, and inter-island communication. Warnings are generated at both content islands if the driver does not maintain a predefined safe distance to the vehicle ahead. Experiments performed with the application show that the developed method can estimate the distance between vehicles and that the inter-island communication has very low overhead, resulting in improved performance. When comparing our proposed edge/fog-computing solution with a cloud-based API, our solution outperformed the cloud-based API, making us optimistic about the utility of the proposed architecture.
    This work was partially funded by the Ministerio de Ciencia, Innovación y Universidades, Programa Estatal de Investigación, Desarrollo e Innovación Orientada a los Retos de la Sociedad, Proyectos I+D+I 2018, Spain, under Grant RTI2018-096384-B-I00.
    Patra, S.; Manzoni, P.; Tavares De Araujo Cesariny Calafate, C. M.; Zamora-Mero, W. J.; Cano, J. (2019). Leveraging a Publish/Subscribe Fog System to Provide Collision Warnings in Vehicular Networks. Sensors, 19(18), 1-22. https://doi.org/10.3390/s19183852
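    A minimal sketch of the publish/subscribe exchange between content islands using the paho-mqtt client is given below; the broker address, topic name, payload fields, and the two-second time-gap threshold are illustrative assumptions, not the paper's implementation.

        # Illustrative MQTT publish/subscribe sketch for exchanging collision warnings between
        # "content islands". Broker address, topic, and payload fields are assumptions.
        # Requires paho-mqtt >= 2.0.
        import json
        import paho.mqtt.client as mqtt

        BROKER = "127.0.0.1"                                  # assumed local broker of the island
        WARNING_TOPIC = "island/alerts/forward_collision"     # hypothetical topic name
        SAFE_GAP_S = 2.0                                      # illustrative safe time gap

        def on_message(client, userdata, msg):
            alert = json.loads(msg.payload)
            print("Collision warning: gap of", alert["gap_s"], "s to the vehicle ahead")

        client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
        client.on_message = on_message
        client.connect(BROKER, 1883)
        client.subscribe(WARNING_TOPIC)
        client.loop_start()

        # Publish a warning when the camera-based estimate of the time gap drops below the threshold.
        estimated_gap_s = 1.4                                 # would come from the distance estimator
        if estimated_gap_s < SAFE_GAP_S:
            client.publish(WARNING_TOPIC, json.dumps({"gap_s": estimated_gap_s}))

        client.loop_stop()
        client.disconnect()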