4,534 research outputs found
An Overview about Emerging Technologies of Autonomous Driving
Since DARPA launched the Grand Challenges in 2004 and the Urban Challenge in 2007,
autonomous driving has been among the most active fields of AI applications. This
paper gives an overview of the technical aspects of autonomous driving
technologies and their open problems. We investigate the major components of
self-driving systems, including perception, mapping and localization, prediction,
planning and control, simulation, V2X, and safety. In particular, we elaborate on
all these issues within a data closed-loop framework, a popular platform for
solving the long-tailed problems of autonomous driving.
A Language Agent for Autonomous Driving
Human-level driving is the ultimate goal of autonomous driving. Conventional
approaches formulate autonomous driving as a perception-prediction-planning
framework, yet their systems do not capitalize on the inherent reasoning
ability and experiential knowledge of humans. In this paper, we propose a
fundamental paradigm shift from current pipelines, exploiting Large Language
Models (LLMs) as a cognitive agent to integrate human-like intelligence into
autonomous driving systems. Our approach, termed Agent-Driver, transforms the
traditional autonomous driving pipeline by introducing a versatile tool library
accessible via function calls, a cognitive memory of common sense and
experiential knowledge for decision-making, and a reasoning engine capable of
chain-of-thought reasoning, task planning, motion planning, and
self-reflection. Powered by LLMs, our Agent-Driver is endowed with intuitive
common sense and robust reasoning capabilities, enabling a more nuanced,
human-like approach to autonomous driving. We evaluate our approach on the
large-scale nuScenes benchmark, and extensive experiments substantiate that
Agent-Driver outperforms state-of-the-art driving methods by a large margin.
Our approach also demonstrates superior interpretability and few-shot learning
ability compared to these methods. Code will be released.
Comment: Project Page: https://usc-gvl.github.io/Agent-Driver
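The tool-library-plus-reasoning-engine design described above can be illustrated with a toy agent loop: the LLM first selects tools to call (function calling), then reasons over the tool outputs to produce a motion plan. All names here (agent_step, TOOLS, the stub LLM) are illustrative assumptions, not the actual Agent-Driver API.

```python
# Hypothetical sketch of an LLM-driven agent loop with a tool library;
# every function and name here is illustrative, not the paper's code.

def detect_objects(scene):
    """Placeholder perception tool."""
    return [{"type": "car", "distance_m": 12.0}]

def retrieve_memory(scene):
    """Placeholder cognitive-memory lookup."""
    return ["slow down near occluded crossings"]

TOOLS = {"detect_objects": detect_objects, "retrieve_memory": retrieve_memory}

def agent_step(scene, llm):
    # 1. The LLM decides which tools to call (function calling).
    plan = llm(f"Given scene {scene}, which tools do you need?")
    observations = {name: TOOLS[name](scene) for name in plan if name in TOOLS}
    # 2. Chain-of-thought reasoning over tool outputs yields a motion plan.
    return llm(f"Reason step by step over {observations} and propose a motion plan.")

# A stub LLM that always requests both tools, for demonstration:
def stub_llm(prompt):
    if "which tools" in prompt:
        return ["detect_objects", "retrieve_memory"]
    return "decelerate to 5 m/s, keep lane"

print(agent_step({"ego_speed": 8.0}, stub_llm))
```

In a real system the stub would be replaced by an LLM endpoint and the tool outputs by actual perception and memory modules.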
Applications of Large Scale Foundation Models for Autonomous Driving
Since the DARPA Grand Challenges (rural) in 2004/05 and the Urban Challenge in 2007,
autonomous driving has been among the most active fields of AI applications.
Recently, chat systems powered by large language models (LLMs), such as ChatGPT
and PaLM, have emerged and rapidly become a promising direction toward
artificial general intelligence (AGI) in natural language processing (NLP). It
is natural to consider employing these abilities to reformulate autonomous
driving. By combining LLMs with foundation models, it is possible to utilize
human knowledge, common sense, and reasoning to rebuild autonomous driving
systems out of the current long-tailed AI dilemma. In this paper, we investigate
the techniques of foundation models and LLMs applied to autonomous driving,
categorized into simulation, world models, data annotation, and planning or
end-to-end (E2E) solutions.
Comment: 23 pages. A survey paper
Framework for data quality in knowledge discovery tasks
The creation and consumption of data continue to grow by leaps and bounds. Due
to advances in Information and Communication Technologies (ICT), the data
explosion in the digital universe is a new trend. Knowledge Discovery in
Databases (KDD) has gained importance due to the abundance of data. A successful
knowledge discovery process requires proper data preparation. Experts affirm
that the preprocessing phase takes 50% to 70% of the total time of a knowledge
discovery process.
Software tools based on knowledge discovery methodologies offer algorithms
for data preprocessing. According to the Gartner 2018 Magic Quadrant for
Data Science and Machine Learning Platforms, KNIME, RapidMiner, SAS, Alteryx,
and H2O.ai are the leading tools for knowledge discovery. These tools
provide different techniques that facilitate the evaluation of the dataset;
however, they lack any kind of guidance as to which techniques can or should
be used in which contexts. Consequently, selecting suitable data cleaning
techniques is a headache for inexperienced users: they have no idea which
methods can be confidently used and often resort to trial and error.
This thesis presents three contributions to address the aforementioned problems:
(i) a conceptual framework that guides the user in addressing data quality
issues in knowledge discovery tasks, (ii) a case-based reasoning system that
recommends suitable algorithms for data cleaning, and (iii) an ontology that
represents the knowledge of data quality issues and data cleaning methods. This
ontology also supports the case-based reasoning system in case representation
and in the reuse phase.
Official Doctoral Program in Computer Science and Technology. Committee: President: Fernando Fernández Rebollo; Secretary: Gustavo Adolfo Ramírez; Member: Juan Pedro Caraça-Valente Hernánde
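The retrieval step of a case-based reasoning recommender like contribution (ii) can be sketched in a few lines: describe each past cleaning case by dataset-quality features, and return the solution of the most similar stored case. The feature names and case base below are invented for illustration and do not come from the thesis's ontology.

```python
# Minimal case-based reasoning retrieval sketch for recommending a data
# cleaning method; features, cases, and method names are illustrative.

CASE_BASE = [
    ({"missing_ratio": 0.30, "outlier_ratio": 0.02}, "mean_imputation"),
    ({"missing_ratio": 0.02, "outlier_ratio": 0.20}, "iqr_outlier_removal"),
    ({"missing_ratio": 0.25, "outlier_ratio": 0.15}, "knn_imputation"),
]

def similarity(a, b):
    # Inverse of L1 distance over shared features (higher = more similar).
    dist = sum(abs(a[k] - b[k]) for k in a)
    return 1.0 / (1.0 + dist)

def recommend_cleaning(new_case):
    # Retrieve phase: return the solution of the most similar stored case.
    best = max(CASE_BASE, key=lambda c: similarity(new_case, c[0]))
    return best[1]

print(recommend_cleaning({"missing_ratio": 0.28, "outlier_ratio": 0.03}))
# → mean_imputation
```

A full CBR cycle would add reuse (adapting the retrieved solution), revision, and retention of new cases, which is where the ontology's support comes in.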
Beyond Prediction: On-street Parking Recommendation using Heterogeneous Graph-based List-wise Ranking
To provide real-time parking information, existing studies focus on
predicting parking availability, which is an indirect approach to saving
drivers' cruising time. In this paper, we propose, for the first time, an
on-street parking recommendation (OPR) task that directly recommends a parking
space to a driver. To this end, a learning-to-rank (LTR) based OPR model called
OPR-LTR is built. Specifically, parking recommendation is closely related to
the "turnover events" (state switching between occupied and vacant) of each
parking space, and hence we design a highly efficient heterogeneous graph
called ESGraph to represent historical and real-time meters' turnover events as
well as geographical relations; afterward, a convolution-based event-then-graph
network is used to aggregate and update representations of the heterogeneous
graph. A ranking model is further utilized to learn a score function that helps
recommend a list of ranked parking spots for a specific on-street parking
query. The method is verified using on-street parking meter data from Hong
Kong and San Francisco. Compared with two other types of methods,
prediction-only and prediction-then-recommendation, the proposed
direct-recommendation method achieves satisfactory performance across different
metrics. Extensive experiments also demonstrate that the proposed ESGraph and
the recommendation model are more computationally efficient and save drivers'
on-street parking time.
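The final ranking step described above, a learned score function applied to each candidate spot and sorted, can be sketched with a linear scorer. The features and hand-set weights below merely stand in for a trained model and are not those of OPR-LTR.

```python
# Toy score-and-rank sketch of the recommendation step; the weights stand
# in for a trained learning-to-rank model and are purely illustrative.

WEIGHTS = {"vacancy_prob": 2.0, "turnover_rate": 0.5, "distance_km": -1.5}

def score(spot):
    # Higher score = more attractive spot (likely vacant, high turnover, near).
    return sum(WEIGHTS[f] * spot[f] for f in WEIGHTS)

def recommend(spots, k=2):
    # Return the top-k spot ids, best first.
    ranked = sorted(spots, key=score, reverse=True)
    return [s["id"] for s in ranked[:k]]

spots = [
    {"id": "A", "vacancy_prob": 0.9, "turnover_rate": 4.0, "distance_km": 0.6},
    {"id": "B", "vacancy_prob": 0.4, "turnover_rate": 6.0, "distance_km": 0.2},
    {"id": "C", "vacancy_prob": 0.8, "turnover_rate": 1.0, "distance_km": 1.0},
]
print(recommend(spots))
# → ['B', 'A']
```

In the paper's pipeline the per-spot features would come from the ESGraph representations rather than being hand-specified.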
Occupancy Patterns Scoping Review Project
Understanding the occupancy and heating patterns of UK domestic consumers is important for understanding the role of demand-side technologies, such as occupancy-based smart heating controls, in managing energy consumption more efficiently. The research undertakes a systematic scoping review to identify and assess the quality of UK and international evidence on occupancy patterns, to critically review common methods of measuring occupancy, and to discuss the potential role of occupancy-based smart heating controls in meeting energy-saving, thermal comfort, and usability requirements. This report was prepared by a team at the University of Southampton and commissioned by the former Department of Energy and Climate Change (DECC).
From Compute to Data: Across-the-Stack System Design for Intelligent Applications
Intelligent applications such as Apple Siri, Google Assistant, and Amazon Alexa have gained tremendous popularity in recent years. With human-like understanding capabilities and a natural language interface, this class of applications is quickly becoming people's preferred way of interacting with their mobile, wearable, and smart home devices. There have been considerable advances in machine learning research aimed at further enhancing the understanding capability of intelligent applications; however, significant roadblocks remain in applying state-of-the-art algorithms and techniques to real-world use cases. First, as machine learning algorithms become more sophisticated, they impose higher computation requirements on the underlying software and hardware system to process intelligent application requests efficiently. Second, state-of-the-art algorithms and techniques are not guaranteed to provide the same level of prediction and classification accuracy when applied to tasks required in real-world intelligent applications, which are often different from and more complex than what is studied in a research environment.
This dissertation addresses these roadblocks by investigating the key challenges across multiple components of an intelligent application system. Specifically, we identify the key compute and data challenges and present system designs and techniques to address them. To improve the computational performance of the hardware and software system, we challenge the status-quo approach of cloud-only intelligent application processing and propose computation partitioning strategies that effectively leverage both the cycles in the cloud and on the mobile device to achieve low latency, low energy consumption, and high datacenter throughput. We characterize and taxonomize state-of-the-art deep learning based natural language processing (NLP) applications to identify the algorithmic design elements and computational patterns that render conventional GPU acceleration techniques ineffective on this class of applications. Leveraging their unique characteristics, we design and implement a novel fine-grained cross-input batching technique for providing GPU acceleration to a number of state-of-the-art NLP applications. For the data component, large-scale and effective training data, in addition to the algorithm, is necessary to achieve high prediction accuracy. We investigate the challenge of effective large-scale training data collection via crowdsourcing. We propose novel metrics to evaluate the quality of training data for building real-world intelligent application systems. We leverage this methodology to study the trade-offs of multiple crowdsourcing methods and provide recommendations on best training data crowdsourcing practices.
PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/145886/1/ypkang_1.pd
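The cloud/device computation-partitioning idea can be sketched as a simple split-point search: run a prefix of the model's stages on the device, pay one transfer cost to ship the intermediate data, and finish in the cloud. The per-stage timings below are invented for illustration, and the cost model (additive latencies, a single transfer) is deliberately far simpler than the dissertation's.

```python
# Illustrative cloud-vs-device split-point search; all timings are made up.

def best_split(device_ms, cloud_ms, transfer_ms):
    """Run stages [0, split) on device, the rest in the cloud, paying one
    transfer cost at the split; return (split_index, total_latency)."""
    n = len(device_ms)
    best = (0, float("inf"))
    for split in range(n + 1):
        latency = (sum(device_ms[:split])
                   + (transfer_ms[split] if split < n else 0)  # no transfer if all on device
                   + sum(cloud_ms[split:]))
        if latency < best[1]:
            best = (split, latency)
    return best

# transfer_ms[i]: cost of shipping the input of stage i to the cloud.
# A big raw input but small intermediate activations favors a hybrid split.
device = [5.0, 20.0, 30.0]
cloud = [1.0, 2.0, 3.0]
transfer = [50.0, 8.0, 4.0]
print(best_split(device, cloud, transfer))
# → (1, 18.0): run stage 0 on device, offload the rest
```

An energy-aware variant would minimize a weighted sum of latency and device energy instead of latency alone.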
A Smart Products Lifecycle Management (sPLM) Framework - Modeling for Conceptualization, Interoperability, and Modularity
Autonomy and intelligence have been built into many of today's mechatronic products, taking advantage of low-cost sensors and advanced data analytics technologies. Design of product intelligence (enabled by analytics capabilities) is no longer a trivial or optional part of product development. The objective of this research is to address the challenges raised by the new data-driven design paradigm for smart product development, in which the product itself and its smartness must be carefully co-constructed.
A smart product can be seen as a specific composition and configuration of its physical components, which form the body, and its analytics models, which implement the intelligence, evolving along its lifecycle stages. Based on this view, the contribution of this research is to expand the "Product Lifecycle Management (PLM)" concept, traditionally applied to physical products, to data-based products. As a result, a Smart Products Lifecycle Management (sPLM) framework is conceptualized based on a high-dimensional Smart Product Hypercube (sPH) representation and decomposition.
First, the sPLM addresses interoperability issues by developing a Smart Component data model to uniformly represent and compose physical component models created by engineers and analytics models created by data scientists. Second, the sPLM implements an NPD3 process model that incorporates a formal data analytics process into the new product development (NPD) process model, in order to support the transdisciplinary information flows and team interactions between engineers and data scientists. Third, the sPLM addresses issues related to product definition, modular design, product configuration, and lifecycle management of analytics models, by adapting the theoretical frameworks and methods of traditional product design and development.
An sPLM proof-of-concept platform was implemented to validate the concepts and methodologies developed throughout the research. The sPLM platform provides a shared data repository to manage the product-, process-, and configuration-related knowledge for smart products development. It also provides a collaborative environment to facilitate transdisciplinary collaboration between product engineers and data scientists.
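The Smart Component idea, one record composing a physical component model with the analytics models attached to it across lifecycle stages, can be sketched as a small data model. The field names below are assumptions for illustration, not the sPLM schema.

```python
# Hypothetical sketch of a "Smart Component" record: physical model plus
# attached analytics models, organized by lifecycle stage. Field names
# are illustrative assumptions, not the sPLM data model.
from dataclasses import dataclass, field

@dataclass
class AnalyticsModel:
    name: str
    stage: str          # lifecycle stage, e.g. "design", "use", "service"
    version: int = 1

@dataclass
class SmartComponent:
    part_number: str
    cad_ref: str                          # pointer to the physical model
    analytics: list = field(default_factory=list)

    def models_for_stage(self, stage):
        return [m.name for m in self.analytics if m.stage == stage]

pump = SmartComponent("P-100", "cad/pump_v3.step")
pump.analytics.append(AnalyticsModel("vibration_anomaly", "use"))
pump.analytics.append(AnalyticsModel("wear_forecast", "service"))
print(pump.models_for_stage("use"))
# → ['vibration_anomaly']
```

Keeping the physical reference and the analytics models in one record is what lets a PLM-style system version and configure both together.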
A Microscopic Simulation Laboratory for Evaluation of Off-street Parking Systems
The parking industry produces an enormous amount of data every day that, properly analyzed, will change the way the industry operates. The collected data form patterns that, in most cases, would allow parking operators and property owners to better understand how to maximize revenue and decrease operating expenses, and would support decisions such as setting specific parking policies (e.g., electrical-charging-only parking spaces) to achieve sustainable and eco-friendly parking.
However, an intelligent tool to assess the layout design and operational performance of parking lots, so as to reduce externalities and increase revenue, is lacking. To address this issue, this research presents a comprehensive agent-based framework for microscopic off-street parking system simulation. A rule-based logic programming model for parking simulation is formulated. The proposed simulation model can effectively capture the behaviors of drivers and pedestrians as well as the spatial and temporal interactions of traffic dynamics in the parking system. A methodology for data collection, processing, and extraction of user behaviors in the parking system is also developed. A Long Short-Term Memory (LSTM) neural network is used to predict the arrivals and departures of vehicles. The proposed simulator is implemented in Java, and a Software-as-a-Service (SaaS) graphical user interface is designed to analyze and visualize the simulation results. This study finds the active capacity of the parking system, defined as the largest number of actively moving vehicles in the parking system under the given facility layout. In the application to a real-world testbed, the numerical tests show that (a) the smart check-in device yields only marginal benefits in vehicle waiting time; (b) the flexible pricing policy may increase average daily revenue if price elasticity is not involved; (c) the number of electrical-charging-only spots has a negative impact on the performance of the parking facility; and (d) the rear-in-only policy may increase the duration of parking maneuvers and reduce efficiency during the arrival rush hour. Application of the developed simulation system to a real-world case demonstrates its capability to provide informative quantitative measures to support decisions in designing, maintaining, and operating smart parking facilities.
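The flavor of such a simulation can be conveyed with a toy time-stepped model: vehicles arrive, occupy a spot for a fixed dwell time, and the simulation tracks peak occupancy and rejected arrivals. This is a deliberately minimal sketch with made-up rates; the paper's agent-based model additionally simulates driver and pedestrian behavior, lot geometry, and LSTM-predicted demand.

```python
# Minimal time-stepped parking-lot sketch; all rules and numbers are toy
# assumptions, far simpler than the paper's agent-based simulator.

def simulate(arrivals, dwell, n_spots):
    """arrivals: list of arrival time steps; dwell: steps each car stays.
    Returns (peak_occupancy, rejected_arrivals)."""
    occupied = []           # departure times of currently parked vehicles
    rejected = 0
    peak = 0
    for t in range(max(arrivals) + dwell + 1):
        occupied = [d for d in occupied if d > t]      # process departures
        for a in arrivals:
            if a == t:
                if len(occupied) < n_spots:
                    occupied.append(t + dwell)         # park until t + dwell
                else:
                    rejected += 1                      # lot full, car turned away
        peak = max(peak, len(occupied))
    return peak, rejected

print(simulate(arrivals=[0, 1, 2, 3, 8], dwell=4, n_spots=3))
# → (3, 1): the lot fills up and one arrival is rejected
```

Replacing the fixed arrival list with model-predicted arrivals and the fixed dwell with a learned distribution is the natural next step toward the paper's setup.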