End-to-end Autonomous Driving using Deep Learning: A Systematic Review
End-to-end autonomous driving is a fully differentiable machine learning
system that takes raw sensor input data and other metadata as prior information
and directly outputs the ego vehicle's control signals or planned trajectories.
This paper attempts to systematically review all recent Machine Learning-based
techniques to perform this end-to-end task, including, but not limited to,
object detection, semantic scene understanding, object tracking, trajectory
predictions, trajectory planning, vehicle control, social behavior, and
communications. This paper focuses on recent fully differentiable end-to-end
reinforcement learning and deep learning-based techniques. Our paper also
builds taxonomies of the significant approaches by sub-grouping them and
showcasing their research trends. Finally, this survey highlights the open
challenges and points out possible future directions to guide further
research on the topic.
Comment: 11 pages, 6 figures, submitted to the WACV conference
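The pipeline this survey covers (raw sensor input in, control signals out, fully differentiable end to end) can be caricatured in a few lines. The sketch below is purely illustrative and assumes a tiny two-layer policy; the sizes, the random weights, and the `end_to_end_policy` name are hypothetical stand-ins for the deep, multi-sensor networks the surveyed work actually trains.

```python
import numpy as np

rng = np.random.default_rng(0)

def end_to_end_policy(frame, w1, w2):
    """Map a flattened camera frame directly to [steering, throttle].

    Every stage is differentiable, so the whole mapping from pixels to
    control could in principle be trained with gradient descent.
    """
    h = np.tanh(frame @ w1)   # learned feature extractor (stand-in for a CNN)
    return np.tanh(h @ w2)    # control head; outputs bounded in [-1, 1]

frame = rng.random(64 * 64)                   # fake 64x64 grayscale frame
w1 = rng.normal(0.0, 0.01, (64 * 64, 32))     # untrained toy weights
w2 = rng.normal(0.0, 0.1, (32, 2))

steering, throttle = end_to_end_policy(frame, w1, w2)
print(steering, throttle)
```

Real systems fuse camera, lidar, radar, and map metadata, and may output planned trajectories instead of direct control; the point here is only the single differentiable path from raw input to action.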
Big Data Coordination Platform: Full Proposal 2017-2022
This proposal for a Big Data and ICT Platform focuses on enhancing CGIAR and partner capacity to deliver big data management, analytics, and ICT-focused solutions to CGIAR target geographies and communities. The ultimate goal of the Platform is to harness the capabilities of big data to accelerate and enhance the impact of international agricultural research. It will support CGIAR's mission by creating an enabling environment where data are expertly managed and used effectively to strengthen delivery on the CGIAR SRF's System Level Outcome (SLO) targets. Critical gaps were identified during extensive scoping consultations with CGIAR researchers and partners (provided in Annex 8). The Platform will address them through ambitious partnerships with initiatives and organizations outside CGIAR, both upstream and downstream, public and private. It will focus on promoting CGIAR-wide collaboration across CRPs and Centers, in addition to developing new partnership models with big data leaders at the global level. As a result, CGIAR and partner capacity will be enhanced, external partnerships will be leveraged, and an institutional culture of collaborative data management and analytics will be established. Important international public goods, such as new global and regional datasets, will be developed, alongside new methods that support CGIAR in using the data revolution as an additional means of delivering on SLOs.
Technologies and Applications for Big Data Value
This open access book explores cutting-edge solutions and best practices for big data and data-driven AI applications for the data-driven economy. It provides the reader with a basis for understanding how technical issues can be overcome to offer real-world solutions to major industrial areas. The book starts with an introductory chapter that provides an overview of the book by positioning the following chapters in terms of their contributions to technology frameworks which are key elements of the Big Data Value Public-Private Partnership and the upcoming Partnership on AI, Data and Robotics. The remainder of the book is then arranged in two parts. The first part, “Technologies and Methods”, contains horizontal contributions of technologies and methods that enable data value chains to be applied in any sector. The second part, “Processes and Applications”, details experience reports and lessons from using big data and data-driven approaches in processes and applications. Its chapters are co-authored with industry experts and cover domains including health, law, finance, retail, manufacturing, mobility, and smart cities. Contributions emanate from the Big Data Value Public-Private Partnership and the Big Data Value Association, which have acted as the European data community's nucleus to bring together businesses with leading researchers to harness the value of data to benefit society, business, science, and industry. The book is of interest to two primary audiences: first, undergraduate and postgraduate students and researchers in various fields, including big data, data science, data engineering, machine learning, and AI; second, practitioners and industry experts engaged in data-driven systems and software design and deployment projects who are interested in employing these advanced methods to address real-world problems.
Enhancing Usability and Explainability of Data Systems
The recent growth of data science has expanded its reach to an ever-growing user base of nonexperts, increasing the need for usability, understandability, and explainability in these systems. Enhancing usability makes data systems accessible to people of different skills and backgrounds alike, leading to the democratization of data systems. Furthermore, a proper understanding of data and data-driven systems is necessary for users to trust the function of systems that learn from data. Finally, data systems should be transparent: when a data system behaves unexpectedly or malfunctions, users deserve a proper explanation of what caused the observed incident. Unfortunately, most existing data systems offer limited usability and support for explanations: these systems are usable only by experts with sound technical skills, and even expert users are hindered by the lack of transparency into the systems' inner workings and functions. The aim of my thesis is to bridge the usability gap between nonexpert users and complex data systems, aid all sorts of users, including expert ones, in data and system understanding, and provide explanations that help reason about unexpected outcomes involving data systems. Specifically, my thesis has the following three goals: (1) enhancing the usability of data systems for nonexperts; (2) enabling data understanding that can assist users in a variety of tasks, such as achieving trust in data-driven machine learning, gaining data understanding, and data cleaning; and (3) explaining the causes of unexpected outcomes involving data and data systems.
For enhancing usability, we focus on example-driven user intent discovery. We develop systems based on example-driven interactions in two different settings: querying relational databases and personalized document summarization. Towards data understanding, we develop a new data-profiling primitive that can characterize tuples for which a machine-learned model is likely to produce untrustworthy predictions. We also develop an explanation framework to explain the causes of such untrustworthy predictions. Additionally, this new data-profiling primitive enables interactive data cleaning. Finally, we develop two explanation frameworks tailored to provide explanations in debugging data system components, including the data itself. These explanation frameworks focus on explaining the root cause of a concurrent application's intermittent failure and exposing issues in the data that cause a data-driven system to malfunction.
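As a rough illustration of the example-driven interaction described for querying relational databases, the toy sketch below infers a conjunctive range predicate from user-supplied example rows and uses it to retrieve similar rows. Every name, the table, and the bounds-based inference rule are hypothetical simplifications, not the thesis's actual system.

```python
def infer_predicate(examples):
    """Infer per-column [min, max] bounds that cover all example rows."""
    cols = examples[0].keys()
    return {c: (min(r[c] for r in examples), max(r[c] for r in examples))
            for c in cols}

def query(table, pred):
    """Return rows satisfying every inferred column range."""
    return [r for r in table
            if all(lo <= r[c] <= hi for c, (lo, hi) in pred.items())]

table = [
    {"price": 10, "rating": 4.5},
    {"price": 95, "rating": 3.0},
    {"price": 25, "rating": 4.8},
]
examples = [table[0], table[2]]   # rows the user marked as "what I want"
pred = infer_predicate(examples)
result = query(table, pred)
print(result)                     # the two cheap, well-rated rows
```

The design point this mimics: the user never writes a query; the system generalizes intent from examples and the user refines by marking more rows.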
Simulation Intelligence: Towards a New Generation of Scientific Methods
The original "Seven Motifs" set forth a roadmap of essential methods for the
field of scientific computing, where a motif is an algorithmic method that
captures a pattern of computation and data movement. We present the "Nine
Motifs of Simulation Intelligence", a roadmap for the development and
integration of the essential algorithms necessary for a merger of scientific
computing, scientific simulation, and artificial intelligence. We call this
merger simulation intelligence (SI), for short. We argue the motifs of
simulation intelligence are interconnected and interdependent, much like the
components within the layers of an operating system. Using this metaphor, we
explore the nature of each layer of the simulation intelligence operating
system stack (SI-stack) and the motifs therein: (1) Multi-physics and
multi-scale modeling; (2) Surrogate modeling and emulation; (3)
Simulation-based inference; (4) Causal modeling and inference; (5) Agent-based
modeling; (6) Probabilistic programming; (7) Differentiable programming; (8)
Open-ended optimization; (9) Machine programming. We believe coordinated
efforts between motifs offer immense opportunity to accelerate scientific
discovery, from solving inverse problems in synthetic biology and climate
science, to directing nuclear energy experiments and predicting emergent
behavior in socioeconomic settings. We elaborate on each layer of the SI-stack,
detailing the state-of-the-art methods, presenting examples to highlight challenges
and opportunities, and advocating for specific ways to advance the motifs and
the synergies from their combinations. Advancing and integrating these
technologies can enable a robust and efficient hypothesis-simulation-analysis
type of scientific method, which we introduce with several use cases for
human-machine teaming and automated science.
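To make one motif concrete, here is a hedged toy sketch of surrogate modeling and emulation (motif 2): a cheap fitted emulator stands in for an "expensive" simulator. The stand-in simulator, the polynomial fit, and the degree are all illustrative assumptions; SI-scale surrogates are typically neural networks or Gaussian processes over high-dimensional physics codes.

```python
import numpy as np

def expensive_simulator(x):
    """Stand-in for a costly simulation run (pretend this takes hours)."""
    return np.sin(3 * x) + 0.5 * x

# A small budget of costly simulator runs provides the training data.
x_train = np.linspace(0.0, 2.0, 20)
y_train = expensive_simulator(x_train)

# Fit a cheap polynomial surrogate to emulate the simulator.
coeffs = np.polyfit(x_train, y_train, deg=7)
surrogate = np.poly1d(coeffs)

# Querying the surrogate is nearly free; here the emulation error is small.
x_new = 1.234
err = abs(surrogate(x_new) - expensive_simulator(x_new))
print(err)
```

Once trained, the surrogate can be evaluated millions of times, e.g. inside an inverse-problem or uncertainty-quantification loop where calling the real simulator would be infeasible.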
Operations Management
Global competition has caused fundamental changes in the competitive environment of the manufacturing and service industries. Firms should develop strategic objectives that, upon achievement, result in a competitive advantage in the marketplace. The forces of globalization on one hand, and rapidly growing marketing opportunities overseas, especially in emerging economies, on the other, have led to the expansion of operations on a global scale. The book aims to cover the main topics characterizing operations management, including both strategic issues and practical applications. A global business environment encompassing both manufacturing and services is analyzed. The book contains original research and application chapters from different perspectives. It is enriched through the analysis of case studies.
Active Reinforcement Learning for the Semantic Segmentation of Images Captured by Mobile Sensors
Neural networks have been employed to attain acceptable performance on semantic segmentation. To perform well, many supervised learning algorithms require a large amount of annotated data. Furthermore, real-world datasets are frequently severely unbalanced, resulting in poor detection of underrepresented classes. The annotation task requires time-consuming human labor. This thesis investigates the use of reinforced active learning as a region-selection method to reduce human labor while achieving competitive results. A Deep Q-Network (DQN) is utilized to identify the best strategy for labeling the most informative regions of the image. A Mean Intersection over Union (MIoU) training performance equivalent to 98% of that of the fully supervised segmentation network was achieved by labeling only 8% of the dataset. Another 8% of the labeled dataset was used to train the DQN. The performance of all three segmentation networks trained with regions selected by Frequency Weighted Average (FWA) IoU is better than that of the baseline methods.
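The MIoU metric this abstract reports can be computed from a per-class confusion matrix: per-class IoU is the diagonal (intersection) over row-sum plus column-sum minus the diagonal (union), averaged across classes. The sketch below is a minimal generic implementation with toy labels, not the thesis's evaluation code.

```python
import numpy as np

def mean_iou(y_true, y_pred, num_classes):
    """Mean Intersection over Union from integer label maps."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true.ravel(), y_pred.ravel()):
        conf[t, p] += 1                      # rows: ground truth, cols: prediction
    inter = np.diag(conf)                    # correctly labeled pixels per class
    union = conf.sum(0) + conf.sum(1) - inter
    ious = inter / np.maximum(union, 1)      # guard against absent classes
    return ious.mean()

# Toy 2x3 label maps with three classes; one pixel of class 0 is mispredicted.
y_true = np.array([[0, 0, 1], [1, 2, 2]])
y_pred = np.array([[0, 1, 1], [1, 2, 2]])
score = mean_iou(y_true, y_pred, 3)
print(score)
```

FWA IoU, also mentioned above, differs only in weighting each class's IoU by its pixel frequency instead of averaging uniformly.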