
    Intelligent Traffic Monitoring Systems for Vehicle Classification: A Survey

    A traffic monitoring system is an integral part of Intelligent Transportation Systems (ITS). It is one of the critical pieces of transportation infrastructure in which transportation agencies invest heavily to collect and analyze traffic data, in order to better utilize roadway systems, improve transportation safety, and establish future transportation plans. With recent advances in MEMS, machine learning, and wireless communication technologies, numerous innovative traffic monitoring systems have been developed. In this article, we present a review of state-of-the-art traffic monitoring systems, focusing on the major functionality: vehicle classification. We organize various vehicle classification systems, examine research issues and technical challenges, and discuss the hardware/software design, deployment experience, and system performance of vehicle classification systems. Finally, we discuss a number of critical open problems and future research directions, with the aim of providing valuable resources to academia, industry, and government agencies for selecting appropriate technologies for their traffic monitoring applications. Comment: Published in IEEE Access.

    The Eye: A Light Weight Mobile Application for Visually Challenged People Using Improved YOLOv5l Algorithm

    The eye is an essential sensory organ that allows us to perceive our surroundings at a glance. Losing this sense can result in numerous challenges in daily life. However, society is designed for the majority, which can create even more difficulties for visually impaired individuals. Therefore, empowering them and promoting self-reliance are crucial. To address this need, we propose a new Android application called "The Eye" that utilizes Machine Learning (ML)-based object detection techniques to recognize objects in real time using a smartphone camera or a camera attached to a stick. This article proposes an improved YOLOv5l algorithm for object detection in visual applications. YOLOv5l has a larger model size and captures more complex features and details, leading to higher object detection accuracy than smaller variants such as YOLOv5s and YOLOv5m. The primary enhancement in the improved YOLOv5l algorithm is the integration of L1 and L2 regularization techniques, which prevent overfitting and improve generalization by adding a regularization term to the loss function during training. Our approach combines image processing and text-to-speech conversion modules to produce reliable results. The Android text-to-speech module is then used to convert the object recognition results into audio output. According to the experimental results, the improved YOLOv5l has higher detection accuracy than the original YOLOv5 and can detect small, multiple, and overlapping targets with higher accuracy. This study contributes to the advancement of technology that helps visually impaired individuals become more self-sufficient and confident. DOI: 10.28991/ESJ-2023-07-05-011. Full Text: PDF.
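The combined L1/L2 regularization described above can be sketched in a few lines; a minimal Python illustration of adding both penalty terms to a detection loss, where the weight arrays and the coefficient values are illustrative assumptions, not values from the paper:

```python
import numpy as np

def regularized_loss(base_loss, weights, l1=1e-4, l2=1e-4):
    """Detection loss plus L1 (sparsity) and L2 (weight decay) penalties.

    `weights` is an iterable of parameter arrays; `l1` and `l2` are
    illustrative coefficients, not the paper's values.
    """
    l1_term = l1 * sum(np.abs(w).sum() for w in weights)
    l2_term = l2 * sum((w ** 2).sum() for w in weights)
    return base_loss + l1_term + l2_term
```

In a training loop, the combined penalty is simply added to the detection loss before back-propagation, so larger weights are discouraged in both absolute (L1) and squared (L2) terms.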

    Intelligent Circuits and Systems

    ICICS-2020 is the third conference initiated by the School of Electronics and Electrical Engineering at Lovely Professional University. It explored recent innovations by researchers working on the development of smart and green technologies in the fields of energy, electronics, communications, computers, and control. ICICS enables innovators to identify new opportunities for the social and economic benefit of society. The conference bridges the gap between academia, R&D institutions, social visionaries, and experts from all strata of society, allowing them to present their ongoing research activities and fostering research relations among them. It provides opportunities for the exchange of new ideas, applications, and experiences in the field of smart technologies and for finding global partners for future collaboration. ICICS-2020 was conducted in two broad categories: Intelligent Circuits & Intelligent Systems, and Emerging Technologies in Electrical Engineering.

    Integration of Neural-Network-Based and Clustering-Based Unlearned Object Detectors for a Real-Time Autonomous Driving Perception System

    Ph.D. Dissertation -- Seoul National University Graduate School: College of Engineering, Department of Mechanical and Aerospace Engineering, August 2020. Advisor: Kyongsu Yi. In recent years, autonomous driving research has accelerated thanks to advances in sensor technology and achievements in computer science. In an autonomous driving system, perceiving the environment around the vehicle is the most important capability required for safe and reliable driving. Such a system consists broadly of perception, decision, and control; the perception module provides critical information because the vehicle must determine the positions and movements of surrounding objects before it can plan a path, make decisions, and execute control. The perception module uses a variety of sensors to understand the driving environment. Among them, LiDAR is one of the most widely used sensors in current autonomous driving research and is particularly useful for acquiring distance information about objects. This dissertation proposes a perception module that uses raw LiDAR point cloud data to extract 3D information about obstacles and to track them. The overall framework consists of three stages: stage 1 generates a mask for estimating non-ground points, stage 2 performs feature extraction and obstacle detection, and stage 3 performs obstacle tracking. Most current neural-network-based object detectors are trained by supervised learning, which carries the methodological limitation that only the objects seen during training can be found. In real driving, however, the vehicle may encounter objects it has never learned, and may even miss objects it has learned.
To cope with this limitation, stage 1 of the perception module partitions the point cloud into equally spaced 3D voxels, extracts the non-ground points, and detects unknown objects from them. In stage 2, features are extracted from each voxel and a network is trained on them to form the object detector. In the final stage, a multi-object tracker based on a Kalman filter and the Hungarian algorithm is proposed. The resulting perception module detects even untrained objects as unknown objects through non-ground point extraction, complementing the learned obstacle detector in real time. Although LiDAR-based object detection for autonomous driving has been studied actively in recent years, most work concentrates on raising single-frame recognition accuracy. Such studies do not consider detection priority or detection continuity across frames. This dissertation proposes a performance index that accounts for these aspects, tests the proposed perception module in real-vehicle experiments, and evaluates it with the proposed index.
In recent years, interest in automotive research on autonomous driving systems has grown owing to advances in sensing technologies and computer science. In the development of an autonomous driving system, knowledge of the subject vehicle's surroundings is the most essential requirement for safe and reliable driving. When making decisions and planning driving scenarios, knowing the locations and movements of surrounding objects, and distinguishing whether an object is a car or a pedestrian, gives valuable information to the autonomous driving system.
In the autonomous driving system, various sensors are used to understand the surrounding environment. Since LiDAR provides the distance to surrounding objects, it has been one of the most commonly used sensors in the development of perception systems. Despite the achievements of deep neural network research, applications and research trends in 3D object detection using LiDAR point clouds tend to pursue higher accuracy without considering practical deployment. A deep-neural-network-based perception module depends heavily on its training dataset, but no dataset can cover all possibilities and corner cases. To apply the perception module in actual driving, it must detect unknown and unlearned objects that it may face on the road. To cope with these problems, this dissertation proposes a perception module using LiDAR point clouds and validates its performance via real-vehicle tests. The whole framework is composed of three stages: stage 1 estimates the ground and produces a mask that filters out points considered non-ground; stage 2 performs feature extraction and object detection; and stage 3 performs object tracking. In the first stage, to cope with the methodological limit of supervised learning, which finds only learned objects, the point cloud is divided into equally spaced 3D voxels, non-ground points are extracted, and the points are clustered to detect unknown objects. In the second stage, voxelization is used to learn the characteristics of point clouds organized in vertical columns. The trained network distinguishes objects through the features extracted from the point clouds. In the non-maximum suppression process, predictions are sorted by the IoU between each prediction and its bounding polygon to select the prediction closest to the actual heading angle of the object. The last stage presents a 3D multiple-object tracking solution.
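The voxel partitioning used in the first two stages can be sketched with numpy; a minimal version that bins points into equally spaced 3D cells, where the 0.2 m cell size is an assumption for illustration, not the dissertation's value:

```python
import numpy as np

def voxelize(points, voxel_size=0.2):
    """Group an (N, 3) point cloud into equally spaced 3D voxels.

    Returns a dict mapping integer voxel indices (ix, iy, iz) to the
    list of points falling inside that cell, ready for per-voxel
    feature extraction or non-ground clustering.
    """
    indices = np.floor(points / voxel_size).astype(int)
    voxels = {}
    for point, idx in zip(points, map(tuple, indices)):
        voxels.setdefault(idx, []).append(point)
    return voxels
```

Per-voxel statistics (point count, mean height, intensity) can then serve as the features a network learns from, while sparsely occupied cells above the ground mask are candidates for unknown-object clustering.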
Through a Kalman filter, the next movement of both learned and unlearned objects is predicted, and this prediction is updated by the measured detections. In this way the proposed detector complements the supervised-learning-based detector, detecting unlearned objects as unknown objects through non-ground point extraction. Recent research on object detection for autonomous driving has been active, but most works concentrate on recognizing objects in each single frame and on improving accuracy. To obtain real-time performance, this dissertation focuses on more practical aspects by proposing a performance index that considers detection priority and detection continuity. The performance of the proposed algorithm has been investigated via real-time vehicle tests.
Contents: Chapter 1, Introduction (background and motivation; overview and previous research; thesis objectives; thesis outline). Chapter 2, Overview of Perception in Automated Driving. Chapter 3, Object Detector (voxelization and feature extraction; backbone network; detection head and loss function design; data augmentation; post-processing). Chapter 4, Non-Ground Point Clustering (previous research on ground removal; non-ground estimation using voxelization; non-ground object segmentation: object clustering, bounding polygon). Chapter 5, Object Tracking (state prediction and update; data association). Chapter 6, Test Results on the KITTI Dataset (quantitative analysis; qualitative analysis; additional training). Chapter 7, Performance Evaluation (current evaluation metrics; limitations: detection continuity, detection priority; criteria for the performance index). Chapter 8, Vehicle-Test-Based Performance Evaluation (test configuration; qualitative analysis; quantitative analysis). Chapter 9, Conclusions and Future Works. Bibliography. Korean Abstract. Degree: Doctor.
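The track-to-detection association step of the tracking stage can be sketched as an assignment problem. The snippet below finds the minimum-cost matching by exhaustive enumeration, which is exact and stands in for the Hungarian algorithm the dissertation uses (the Hungarian method scales to larger problems; enumeration is fine for the handful of objects in a typical frame):

```python
from itertools import permutations

def associate(tracks, detections):
    """Match predicted 2D track positions to detections by minimising
    the total squared distance; returns (track_idx, det_idx) pairs.

    Brute-force enumeration over assignments, standing in for the
    Hungarian algorithm; exact for small object counts.
    """
    if not tracks or not detections:
        return []
    n = min(len(tracks), len(detections))
    best, best_cost = [], float("inf")
    for perm in permutations(range(len(detections)), n):
        cost = sum((tracks[i][0] - detections[j][0]) ** 2 +
                   (tracks[i][1] - detections[j][1]) ** 2
                   for i, j in zip(range(n), perm))
        if cost < best_cost:
            best, best_cost = list(zip(range(n), perm)), cost
    return best
```

In a full tracker, each matched detection would then feed the Kalman filter's measurement update for its track, while unmatched detections spawn new tracks.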

    Positioning Commuters And Shoppers Through Sensing And Correlation

    Positioning is a basic and important need in many scenarios of human daily activity. With position information, multifarious services can be vitalized to benefit all kinds of users, from individuals to organizations. Through positioning, people are able to obtain not only geo-location but also time-related information. By aggregating position information from individuals, organizations can derive statistical knowledge about group behaviors, such as traffic, business, and events. Although enormous effort has been invested in positioning-related academic and industrial work, there are still many holes to be filled. This dissertation proposes solutions to address the need for positioning in people's daily lives from two aspects: transportation and shopping. All the solutions are smart-device-based (e.g., smartphone, smartwatch), which could potentially benefit most users considering the prevalence of smart devices. In positioning-relevant activities, the components and their movement information can be sensed by different entities from diverse perspectives. The mechanisms presented in this dissertation treat the information collected from one perspective as a reference and match it against the data collected from other perspectives to acquire absolute or relative position, in the spatial as well as the temporal dimension. For transportation, both driver- and passenger-oriented solutions are proposed. To help drivers improve safety and ease the tension of driving, two correlated systems, OmniView [1] and DriverTalk [2], are provided. These systems infer the relative positions of vehicles moving together by matching the appearance images of the vehicles as seen by each other, which helps drivers maintain a safe distance from surrounding vehicles and also gives them opportunities to precisely convey driving-related messages to targeted peer drivers. To improve the bus-riding experience for passengers of public transit systems, a system named RideSense [3] is developed.
This system correlates the sensor traces collected by passengers' smart devices with those from reference devices in buses to position passengers' bus rides, spatially and temporally. With this system, passengers can be billed without any explicit interaction with conventional ticketing facilities in the bus system, which makes the transportation system more efficient. For shopping activities, AutoLabel [4, 5] comes into play, positioning customers with respect to stores. AutoLabel constructs a mapping between WiFi vectors and the semantic names of stores by correlating the text displayed inside stores with that on the stores' websites. Later, through a WiFi scan and a lookup in the mapping, customers' smart devices can automatically recognize the semantic names of the stores they are in or near. An AutoLabel-enabled smart device therefore serves as a bridge for the information flow between business owners and customers, which can benefit both sides.
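The trace correlation behind RideSense can be illustrated with a simple cross-correlation: the lag that maximises the correlation between a passenger phone's motion trace and the bus reference device's trace aligns the two recordings in time. This is a minimal numpy sketch of the idea, not the paper's actual pipeline:

```python
import numpy as np

def align_traces(phone, bus_ref):
    """Return the sample lag by which the phone trace trails the bus
    reference trace, taken at the peak of the full cross-correlation."""
    corr = np.correlate(phone, bus_ref, mode="full")
    return int(np.argmax(corr)) - (len(bus_ref) - 1)
```

Once the lag is known, the overlapping, time-aligned portion of the two traces tells when (and, via the bus's own position log, where) the passenger's ride took place.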

    An intelligent multi-floor mobile robot transportation system in life science laboratories

    In this dissertation, a new intelligent multi-floor transportation system based on a mobile robot is presented to connect distributed laboratories in a multi-floor environment. In the system, new indoor mapping and localization methods are presented, a hybrid path planning approach is proposed, and an automated door management system is introduced. In addition, a hybrid strategy with an innovative floor estimation method is implemented to handle elevator operations. Finally, the presented system controls the working processes of the related subsystems. Experiments prove the efficiency of the presented system.
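The abstract does not say how the floor estimation works; one common approach for multi-floor robots riding elevators is barometric. The sketch below is a hypothetical illustration, assuming a pressure gradient of roughly 12 Pa per metre and a 3.5 m storey height, neither of which comes from the paper:

```python
def estimate_floor(p_ground_pa, p_current_pa, floor_height_m=3.5, pa_per_m=12.0):
    """Estimate the current floor from the barometric pressure drop
    relative to a reference reading taken on the ground floor.
    All constants are illustrative assumptions, not the paper's values.
    """
    altitude_m = (p_ground_pa - p_current_pa) / pa_per_m
    return round(altitude_m / floor_height_m)
```

In practice such an estimate would be fused with elevator state information (door events, motion detection) rather than used alone, since weather changes drift the absolute pressure.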

    HPM-Frame: A Decision Framework for Executing Software on Heterogeneous Platforms

    Heterogeneous computing is one of the most important computational solutions to meet rapidly increasing demands on system performance. It typically allows the main flow of an application to be executed on a CPU while the most computationally intensive tasks are assigned to one or more accelerators, such as GPUs and FPGAs. Refactoring systems for execution on such platforms is highly desired but also difficult to perform, mainly due to the inherent increase in software complexity. After exploration, we have identified a current need for a systematic approach that supports engineers in the refactoring process -- from CPU-centric applications to software that is executed on heterogeneous platforms. In this paper, we introduce a decision framework that assists engineers in the task of refactoring software to incorporate heterogeneous platforms. It covers the software engineering lifecycle through five steps, consisting of questions to be answered in order to successfully address aspects that are relevant to the refactoring procedure. We evaluate the feasibility of the framework in two ways. First, we capture practitioners' impressions, concerns, and suggestions through a questionnaire. Then, we conduct a case study showing the step-by-step application of the framework using a computer vision application in the automotive domain. Comment: Manuscript submitted to the Journal of Systems and Software.
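A question-driven, stepwise framework of this kind can be represented as a simple checklist that reports the first step whose questions remain unanswered. The step names and questions below are invented placeholders for illustration; the abstract does not enumerate HPM-Frame's actual five steps:

```python
# Hypothetical steps and questions; the paper's five lifecycle steps
# are not named in the abstract.
STEPS = [
    ("Scoping", ["Which tasks are computationally intensive?"]),
    ("Platform selection", ["GPU, FPGA, or both?"]),
    ("Partitioning", ["Which functions move off the CPU?"]),
    ("Refactoring", ["How are data transfers scheduled?"]),
    ("Validation", ["Does performance meet the target?"]),
]

def next_step(answers):
    """Return the first step with an unanswered question, or None
    when every question has an entry in `answers`."""
    for name, questions in STEPS:
        if any(q not in answers for q in questions):
            return name
    return None
```

Encoding the framework this way makes progress through the lifecycle auditable: each recorded answer documents a refactoring decision and the step it belongs to.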