Intelligent Traffic Monitoring Systems for Vehicle Classification: A Survey
A traffic monitoring system is an integral part of Intelligent Transportation
Systems (ITS). It is one of the critical transportation infrastructures in which
transportation agencies invest huge amounts of money to collect and analyze
traffic data in order to better utilize roadway systems, improve
transportation safety, and establish future transportation plans. With recent advances
in MEMS, machine learning, and wireless communication technologies, numerous
innovative traffic monitoring systems have been developed. In this article, we
present a review of state-of-the-art traffic monitoring systems focusing on the
major functionality--vehicle classification. We organize various vehicle
classification systems, examine research issues and technical challenges, and
discuss hardware/software design, deployment experience, and system performance
of vehicle classification systems. Finally, we discuss a number of critical
open problems and future research directions, aiming to provide valuable
resources to academia, industry, and government agencies for selecting
appropriate technologies for their traffic monitoring applications.

Comment: Published in IEEE Access
The Eye: A Light Weight Mobile Application for Visually Challenged People Using Improved YOLOv5l Algorithm
The eye is an essential sensory organ that allows us to perceive our surroundings at a glance. Losing this sense can result in numerous challenges in daily life. However, society is designed for the majority, which can create even more difficulties for visually impaired individuals. Therefore, empowering them and promoting self-reliance are crucial. To address this need, we propose a new Android application called "The Eye" that utilizes Machine Learning (ML)-based object detection techniques to recognize objects in real time using a smartphone camera or a camera attached to a stick. This article proposes an improved YOLOv5l algorithm to enhance object detection in visual applications. YOLOv5l has a larger model size and captures more complex features and details, leading to higher object detection accuracy than smaller variants such as YOLOv5s and YOLOv5m. The primary enhancement in the improved YOLOv5l algorithm is the integration of L1 and L2 regularization techniques, which prevent overfitting and improve generalization by adding a regularization term to the loss function during training. Our approach combines image processing and text-to-speech conversion modules to produce reliable results: the Android text-to-speech module converts the object recognition results into audio output. According to the experimental results, the improved YOLOv5l has higher detection accuracy than the original YOLOv5 and can detect small, multiple, and overlapping targets with higher accuracy. This study contributes to the advancement of technology that helps visually impaired individuals become more self-sufficient and confident.

Doi: 10.28991/ESJ-2023-07-05-011 Full Text: PDF
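The L1 + L2 (elastic-net-style) regularization the abstract describes can be sketched framework-agnostically. The function name and penalty weights below are illustrative assumptions, not values from the paper:

```python
# Minimal sketch: add L1 and L2 penalty terms over all model weights to a
# base detection loss, as in elastic-net regularization. The lambda values
# are placeholders, not the paper's hyperparameters.

def regularized_loss(base_loss, weights, lambda_l1=1e-5, lambda_l2=1e-4):
    """Return base_loss plus L1 and L2 penalties over the weights."""
    l1 = sum(abs(w) for w in weights)   # L1 term: encourages sparsity
    l2 = sum(w * w for w in weights)    # L2 term: discourages large weights
    return base_loss + lambda_l1 * l1 + lambda_l2 * l2
```

In a deep-learning framework, the same idea is usually applied by summing these penalties over every trainable parameter tensor before backpropagation.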
Intelligent Circuits and Systems
ICICS-2020 is the third conference initiated by the School of Electronics and Electrical Engineering at Lovely Professional University; it explored recent innovations by researchers working on the development of smart and green technologies in the fields of Energy, Electronics, Communications, Computers, and Control. ICICS provides innovators with opportunities to identify new avenues for the social and economic benefit of society. The conference bridges the gap between academia, R&D institutions, social visionaries, and experts from all strata of society, enabling them to present their ongoing research activities and fostering research relations between them. It provides opportunities for the exchange of new ideas, applications, and experiences in the field of smart technologies, and for finding global partners for future collaboration. ICICS-2020 was conducted in two broad categories: Intelligent Circuits & Intelligent Systems, and Emerging Technologies in Electrical Engineering.
Integration of a Neural Network and a Clustering-Based Unknown-Object Detector for a Real-Time Autonomous Driving Perception System
Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Mechanical and Aerospace Engineering, August 2020. Advisor: Kyongsu Yi.
In recent years, interest in automotive research on autonomous driving systems has grown due to advances in sensing technologies and computer science. In the development of an autonomous driving system, knowledge about the subject vehicle's surroundings is the most essential function for safe and reliable driving. When it comes to making decisions and planning driving scenarios, knowing the location and movements of surrounding objects, and distinguishing whether an object is a car or a pedestrian, gives valuable information to the autonomous driving system. In the autonomous driving system, various sensors are used to understand the surrounding environment. Since LiDAR gives the distance information of surrounding objects,
it has been one of the most commonly used sensors in the development of perception systems.
Despite the achievements of the deep neural network research field, applications and research trends in 3D object detection using LiDAR point clouds tend to pursue higher accuracy without considering practical application. A deep-neural-network-based perception module depends heavily on its training dataset, but it is impossible to cover all possibilities and corner cases. To apply the perception module in actual driving, it needs to detect unknown and unlearned objects that it may encounter on the road. To cope with these problems, this dissertation proposes a perception module using LiDAR point clouds and validates its performance via real-vehicle tests. The whole framework is composed of three stages: stage 1 estimates the ground, which serves as a mask for filtering out points considered non-ground; stage 2 performs feature extraction and object detection; and stage 3 performs object tracking. In the first stage, to cope with the methodological limit of supervised learning, which only finds learned objects, the point cloud is divided into equally spaced 3D voxels, non-ground points are extracted, and those points are clustered to detect unknown objects. In the second stage, voxelization is utilized to learn the characteristics of point clouds organized in vertical columns; the trained network distinguishes objects through the features extracted from the point clouds. In the non-maximum suppression process, predictions are sorted according to the IoU between the prediction and the bounding polygon, to select a prediction close to the actual heading angle of the object. The last stage presents a 3D multiple-object tracking solution: through a Kalman filter, the next movement of learned and unlearned objects is predicted, and this prediction is updated by measured detections. Through this process, the proposed object detector complements the supervised-learning-based detector by detecting unlearned objects as unknown objects through non-ground point extraction. Recent research on object detection for autonomous driving has been actively conducted, but recent works tend to focus on recognizing objects in every single frame and on developing accurate systems. To obtain real-time performance, this dissertation focuses on more practical aspects by proposing a performance index that considers detection priority and detection continuity. The performance of the proposed algorithm has been investigated via real-time vehicle tests.

Chapter 1 Introduction
1.1. Background and Motivation
1.2. Overview and Previous Researches
1.3. Thesis Objectives
1.4. Thesis Outline
Chapter 2 Overview of Perception in Automated Driving
Chapter 3 Object Detector
3.1. Voxelization & Feature Extraction
3.2. Backbone Network
3.3. Detection Head & Loss Function Design
3.4. Loss Function Design
3.5. Data Augmentation
3.6. Post Process
Chapter 4 Non-Ground Point Clustering
4.1. Previous Researches for Ground Removal
4.2. Non-Ground Estimation using Voxelization
4.3. Non-Ground Object Segmentation
4.3.1. Object Clustering
4.3.2. Bounding Polygon
Chapter 5 Object Tracking
5.1. State Prediction and Update
5.2. Data Matching Association
Chapter 6 Test Results for the KITTI Dataset
6.1. Quantitative Analysis
6.2. Qualitative Analysis
6.3. Additional Training
6.3.1. Additional Data Acquisition
6.3.2. Qualitative Analysis
Chapter 7 Performance Evaluation
7.1. Current Evaluation Metrics
7.2. Limitations of Evaluation Metrics
7.2.1. Detection Continuity
7.2.2. Detection Priority
7.3. Criteria for Performance Index
Chapter 8 Vehicle-Test-Based Performance Evaluation
8.1. Configuration of Vehicle Tests
8.2. Qualitative Analysis
8.3. Quantitative Analysis
Chapter 9 Conclusions and Future Works
Bibliography
Abstract in Korean
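The stage-1 voxel partitioning and non-ground filtering described in the abstract above can be sketched minimally. The voxel size, the flat-ground threshold, and the function name are illustrative assumptions, not the thesis's implementation (the thesis estimates the ground surface rather than assuming a fixed height):

```python
from collections import defaultdict

def voxelize(points, voxel_size=0.2, ground_z=0.0):
    """Bucket (x, y, z) points into equally spaced 3D voxels, keeping only
    points above an assumed flat ground height (a crude stand-in for the
    thesis's estimated ground mask)."""
    voxels = defaultdict(list)
    for x, y, z in points:
        if z <= ground_z:
            continue  # discard points treated as ground
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        voxels[key].append((x, y, z))
    return voxels
```

The occupied voxels can then be clustered (e.g. by adjacency) so that groups of non-ground points are reported as unknown objects even when the supervised detector has never seen them.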
Positioning Commuters And Shoppers Through Sensing And Correlation
Positioning is a basic and important need in many scenarios of human daily activities. With position information, multifarious services could be vitalized to benefit all kinds of users, from individuals to organizations. Through positioning, people are able to obtain not only geo-location but also time related information. By aggregating position information from individuals, organizations could derive statistical knowledge about group behaviors, such as traffic, business, event, etc.
Although enormous effort has been invested in positioning-related academic and industrial work, there are still many holes to be filled. This dissertation proposes solutions to address the need for positioning in people's daily lives from two aspects: transportation and shopping. All the solutions are smart-device-based (e.g. smartphone, smartwatch), which could potentially benefit most users considering the prevalence of smart devices.
In positioning relevant activities, the components and their movement information could be sensed by different entities from diverse perspectives. The mechanisms presented in this dissertation treat the information collected from one perspective as reference and match it against the data collected from other perspectives to acquire absolute or relative position, in spatial as well as temporal dimension.
For transportation, both driver and passenger oriented solutions are proposed. To help drivers improve safety and ease the tension from driving, two correlated systems, OmniView [1] and DriverTalk [2], are provided. These systems infer the relative positions of the vehicles moving together by matching the appearance images of the vehicles seen by each other, which help drivers maintain safe distance from surrounding vehicles and also give them opportunities to precisely convey driving related messages to targeted peer drivers.
To improve the bus-riding experience for passengers of public transit systems, a system named RideSense [3] is developed. This system correlates the sensor traces collected by passengers' smart devices with those collected by reference devices in buses to position passengers' bus rides, spatially and temporally. With this system, passengers could be billed without any explicit interaction with conventional ticketing facilities in the bus system, which makes the transportation system more efficient.
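The temporal side of this trace correlation can be illustrated with a simple sliding-lag search that aligns a rider's short sensor trace against the bus's longer reference trace. This is a hypothetical sketch of the idea, not RideSense's actual matching algorithm:

```python
def best_offset(reference, trace):
    """Slide `trace` along `reference` and return the lag with the highest
    dot-product correlation, i.e. the time offset at which the two sensor
    traces agree best."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(reference) - len(trace) + 1):
        score = sum(r * t for r, t in zip(reference[lag:], trace))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

With the best lag known, the rider's boarding and alighting times can be read off the reference device's timeline.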
For shopping activities, AutoLabel [4, 5] comes into play, which positions customers with regard to stores. AutoLabel constructs a mapping between WiFi vectors and the semantic names of stores by correlating the text displayed inside stores with that on the stores' websites. Later, through WiFi scanning and a lookup in the mapping, customers' smart devices can automatically recognize the semantic names of the stores they are in or near. Thus, an AutoLabel-enabled smart device serves as a bridge for the information flow between business owners and customers, which could benefit both sides.
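The lookup step can be pictured as a nearest-neighbor match between the current WiFi scan and stored per-store fingerprints. The data layout, distance metric, and names below are assumptions for illustration, not AutoLabel's actual design:

```python
def nearest_store(scan, fingerprints):
    """Return the semantic store name whose stored WiFi RSSI fingerprint is
    closest (squared Euclidean distance) to the current scan.
    `fingerprints` maps store name -> RSSI vector over the same APs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(fingerprints, key=lambda name: dist(scan, fingerprints[name]))
```

A real deployment would also have to handle access points that appear in only one of the two vectors, typically by padding with a low default RSSI.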
An intelligent multi-floor mobile robot transportation system in life science laboratories
In this dissertation, a new intelligent multi-floor transportation system based on a mobile robot is presented to connect distributed laboratories in a multi-floor environment. In the system, new indoor mapping and localization methods are presented, a hybrid path planning approach is proposed, and an automated door management system is introduced. In addition, a hybrid strategy with innovative floor estimation to handle elevator operations is implemented. Finally, the presented system controls the working processes of the related sub-systems. The experiments prove the efficiency of the presented system.
HPM-Frame: A Decision Framework for Executing Software on Heterogeneous Platforms
Heterogeneous computing is one of the most important computational solutions
to meet rapidly increasing demands on system performance. It typically allows
the main flow of applications to be executed on a CPU while the most
computationally intensive tasks are assigned to one or more accelerators, such
as GPUs and FPGAs. The refactoring of systems for execution on such platforms
is highly desired but also difficult to perform, mainly due to the inherent
increase in software complexity. After exploration, we have identified a
current need for a systematic approach that supports engineers in the
refactoring process -- from CPU-centric applications to software that is
executed on heterogeneous platforms. In this paper, we introduce a decision
framework that assists engineers in the task of refactoring software to
incorporate heterogeneous platforms. It covers the software engineering
lifecycle through five steps, consisting of questions to be answered in order
to successfully address aspects that are relevant for the refactoring
procedure. We evaluate the feasibility of the framework in two ways. First, we
capture the practitioner's impressions, concerns and suggestions through a
questionnaire. Then, we conduct a case study showing the step-by-step
application of the framework using a computer vision application in the
automotive domain.

Comment: Manuscript submitted to the Journal of Systems and Software