
    Adaptive Perception, State Estimation, and Navigation Methods for Mobile Robots

    In this cumulative habilitation, publications focusing on robotic perception, self-localization, tracking, navigation, and human–machine interfaces have been selected. While some of the publications present research on vision and machine learning tasks with a PR2 household robot at the Robotics Learning Lab of the University of California, Berkeley, most present research results from work at the AutoNOMOS-Labs at Freie Universität Berlin, with a focus on control, planning and object tracking for the autonomous vehicles "MadeInGermany" and "e-Instein".

    Hands on the wheel: a Dataset for Driver Hand Detection and Tracking

    The ability to detect, localize and track hands is crucial in many applications that require an understanding of a person's behavior, attitude and interactions. This is particularly true in the automotive context, where hand analysis makes it possible to predict preparatory movements for maneuvers or to investigate the driver's attention level. Moreover, thanks to the recent diffusion of cameras inside new car cockpits, it is feasible to use hand gestures to develop new human–car interaction systems that are more user-friendly and safe. In this paper, we propose a new dataset, called Turms, consisting of infrared images of drivers' hands collected from behind the steering wheel, an innovative point of view. The Leap Motion device was selected for the recordings thanks to its stereo capabilities and wide viewing angle. In addition, we introduce a method to detect the presence and location of the driver's hands on the steering wheel during driving tasks.

    A Steering Wheel Mounted Grip Sensor: Design, Development and Evaluation

    Department of Human Factors Engineering. Driving is a commonplace but safety-critical daily activity for billions of people. It remains one of the leading causes of death worldwide, particularly among younger adults. In recent decades, a wide range of technologies, such as intelligent braking or speed-regulating systems, have been integrated into vehicles to improve safety; annually decreasing death rates testify to their success. A recent research focus in this area has been the development of systems that sense human states or activities during driving. This is valuable because human error remains a key factor underlying many vehicle accidents and incidents. Technologies that can intervene in response to information sensed about a driver may be able to detect, predict and ultimately prevent problems before they progress into accidents, thus avoiding the occurrence of critical situations rather than just mitigating their consequences. Commercial examples of this kind of technology include systems that monitor driver alertness or lane holding and prompt drivers who are sleepy or drifting off-lane. More exploratory research in this area has sought to capture emotional state or stress/workload levels via physiological measurements of Heart Rate Variability (HRV), Electrocardiogram (ECG) and Electroencephalogram (EEG), or behavioral measurements of eye gaze or face pose. Other research has explicitly monitored user actions, such as head pose or foot movements, to infer intended actions (such as overtaking or lane change) and provide automatic assessments of the safety of these future behaviors, for example by providing a timely warning to a driver who is planning to overtake about a vehicle in his or her blind spot. Researchers have also explored how sensing hands on the wheel can be used to infer a driver's presence, identity or emotional state.
    This thesis extends this body of work through the design, development and evaluation of a steering wheel sensor platform that can directly detect a driver's hand pose all around the steering wheel. This thesis argues that full steering hand pose is a potentially rich source of information about a driver's intended actions. For example, it proposes a link between hand posture on the wheel and subsequent turning or lane change behavior. To explore this idea, this thesis describes the construction of a touch sensor in the form of a steering wheel cover. This cover integrates 32 equidistantly spread touch-sensing electrodes (11.25° inter-sensor spacing) in the form of conductive ribbons (0.2" wide and 0.03" thick). Data from each ribbon is captured separately via a set of capacitive touch sensor microcontrollers every 64 ms. We connected this hardware platform to OpenDS, an open-source driving simulator, and ran two studies capturing hand pose during a sequential lane change task and a slalom task. We analyzed the data to determine whether hand pose is a useful predictor of future turning behavior. For this we classified a 5-lane road into 4 turn sizes and used machine-learning recognizers to predict the future turn size from the change in hand posture, in terms of hand movement properties, in the early driving data. The driving task scenario of the first experiment did not appropriately match a real-life turning task, so we modified the scenario with a more appropriate task in the second experiment. Class-wise prediction of the turn sizes did not show good accuracy in either experiment; however, prediction accuracy improved when the four classes were reduced to two. In experiment 2 the turn sizes overlapped with one another, which made them very difficult to distinguish.
    Therefore, we also performed continuous prediction, whose accuracy was better than that of the class-wise prediction system in both experiments. In summary, this thesis designed, developed and evaluated a combined hardware and software system that senses the steering behavior of a driver by capturing grip pose. We assessed the value of this information via two studies that explored the relationship between wheel grip and future turning behaviors. The ultimate outcome of this study can inform the development of in-car sensing systems to support safer driving.
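The reduced two-class turn prediction described above might be sketched as follows. This is a hypothetical illustration, not the thesis's actual recognizer: the 11.25° electrode spacing and 64 ms frame period come from the abstract, while the centroid feature, the threshold and the two-class rule are invented for the sketch.

```python
import math

NUM_ELECTRODES = 32                       # equidistant ribbons around the rim
SPACING_DEG = 360.0 / NUM_ELECTRODES      # 11.25 degrees between sensors

def grip_centroid_deg(frame):
    """Circular mean angle (degrees) of the touched electrodes in one
    64 ms capacitive frame; `frame` is a sequence of 32 booleans."""
    xs = ys = 0.0
    touched_any = False
    for i, touched in enumerate(frame):
        if touched:
            a = math.radians(i * SPACING_DEG)
            xs += math.cos(a)
            ys += math.sin(a)
            touched_any = True
    if not touched_any:
        return None
    return math.degrees(math.atan2(ys, xs)) % 360.0

def hand_shift_deg(frames):
    """Net signed rotation of the grip centroid across a frame sequence,
    taking the 0/360 wraparound into account."""
    cents = [c for c in (grip_centroid_deg(f) for f in frames) if c is not None]
    if len(cents) < 2:
        return 0.0
    diff = cents[-1] - cents[0]
    return ((diff + 180.0) % 360.0) - 180.0

def classify_turn(frames, threshold_deg=30.0):
    """Two-class prediction (the reduced-class setting from the abstract):
    a large preparatory hand shift suggests a large upcoming turn."""
    return "large" if abs(hand_shift_deg(frames)) >= threshold_deg else "small"
```

A real recognizer would use richer hand-movement features and a trained classifier, but the idea of reading a preparatory grip shift from the electrode frames is the same.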

    Design and Validation of a High-Level Controller for Automotive Active Systems

    Active systems, from active safety to energy management, play a crucial role in the development of new road vehicles. However, the increasing number of controllers creates an important issue regarding complexity and system integration. This article proposes a high-level controller managing the individual active systems - namely, Torque Vectoring (TV), Active Aerodynamics, Active Suspension, and Active Safety (Anti-lock Braking System [ABS], Traction Control, and Electronic Stability Program [ESP]) - through a dynamic state variation. The high-level controller is implemented and validated in a simulation environment with a series of tests that evaluate the performance of the original design and the proposed high-level control. Then, a comparison of the Virtual Driver (VD) response and the Driver-in-the-Loop (DiL) behavior is performed to assess the limits between virtual simulation and real-driver response in a lap-time condition. The main advantages of the proposed design methodology are its simplicity and the overall cooperation of the different active systems: the proposed model was able to improve vehicle behavior in terms of both safety and performance, giving the driver more confidence when cornering and under braking. Some differences were discovered between the behavior of the VD and the DiL, especially regarding sensitivity to external disturbances.
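A supervisor of this kind can be pictured as a simple state-based dispatcher over the low-level systems named in the abstract. The sketch below is only a hypothetical illustration; all thresholds and the activation logic are assumptions, not the article's design.

```python
def arbitrate(slip_ratio, yaw_error, speed, braking):
    """Decide which low-level active systems the high-level supervisor
    enables in the current dynamic state. Thresholds are illustrative:
    slip_ratio is longitudinal wheel slip, yaw_error is the deviation
    from the reference yaw rate [rad/s], speed is in m/s."""
    active = set()
    if braking and slip_ratio > 0.15:
        active.add("ABS")                 # wheel locking under braking
    if not braking and slip_ratio > 0.1:
        active.add("TractionControl")     # wheel spin under acceleration
    if abs(yaw_error) > 0.1:
        active.add("ESP")                 # stabilize the yaw motion
        active.add("TorqueVectoring")     # redistribute drive torque
    if speed > 30.0:
        active.add("ActiveAero")          # downforce only pays off at speed
    return active
```

In the article the arbitration acts through a dynamic state variation rather than hard thresholds; the sketch only conveys the idea of one supervisor coordinating several otherwise independent controllers.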

    Impedance Modulation for Negotiating Control Authority in a Haptic Shared Control Paradigm

    Communication and cooperation among team members can be enhanced significantly by physical interaction. Successful collaboration requires the integration of the individual partners' intentions into a shared action plan, which may involve a continuous negotiation of intentions and roles. This paper presents an adaptive haptic shared control framework wherein a human driver and an automation system are physically connected through a motorized steering wheel. By virtue of haptic feedback, the driver and automation system can monitor each other's actions and can still intuitively express their control intentions. The objective of this paper is to develop a systematic model for an automation system that can vary its impedance such that control authority can transition between the two agents intuitively and smoothly. To this end, we defined a cost function that not only ensures the safety of the collaborative task but also takes account of the assistive behavior of the automation system. We employed a predictive controller based on modified least squares to modulate the automation system's impedance such that the cost function is optimized. The results demonstrate the significance of the proposed approach for negotiating control authority, specifically when the human and the automation are in a non-cooperative mode. Furthermore, the performance of the adaptive haptic shared control is compared with the traditional fixed-impedance haptic shared control paradigm. Comment: final manuscript accepted at the 2020 American Control Conference (ACC).
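The core mechanism, modulating the automation's impedance by trading a safety term against a conflict (non-cooperation) term, can be sketched with a single scalar adaptation step. This is a simplified stand-in for the paper's predictive, modified-least-squares controller; the cost weights, the conflict measure and the update rule are all illustrative assumptions.

```python
def update_automation_stiffness(k_a, driver_torque, automation_torque,
                                lane_error, w_safety=1.0, w_conflict=0.5,
                                rate=0.1, k_min=0.2, k_max=5.0):
    """One illustrative adaptation step for the automation impedance k_a:
    raise it when the lane-keeping error grows (safety term), lower it
    when the driver visibly pushes against the automation (conflict term),
    so control authority yields smoothly to the human."""
    # Opposing torque signs indicate the two agents are fighting each other.
    conflict = max(0.0, -driver_torque * automation_torque)
    gradient = w_safety * abs(lane_error) - w_conflict * conflict
    k_new = k_a + rate * gradient
    return min(k_max, max(k_min, k_new))    # keep impedance in a safe band
```

In the paper the trade-off is resolved by optimizing a cost function over a prediction horizon rather than by this one-step heuristic, but the direction of adaptation is the same: conflict lowers the automation's impedance, risk raises it.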

    A Framework For Abstracting, Designing And Building Tangible Gesture Interactive Systems

    This thesis discusses tangible gesture interaction, a novel paradigm for interacting with computers that blends concepts from the more popular fields of tangible interaction and gesture interaction. Taking advantage of the innate human abilities to manipulate physical objects and to communicate through gestures, tangible gesture interaction is particularly interesting for interacting in smart environments, bringing interaction with computers beyond the screen, back to the real world. Since tangible gesture interaction is a relatively new field of research, this thesis presents a conceptual framework that aims to support future work in this field. The Tangible Gesture Interaction Framework provides support on three levels. First, it helps in reflecting, from a theoretical point of view, on the different types of tangible gestures that can be designed: physically, through a taxonomy based on three components (move, hold and touch) and additional attributes; and semantically, through a taxonomy of the semantic constructs that can be used to associate meaning with tangible gestures. Second, it helps in conceiving new tangible gesture interactive systems and designing new interactions based on gestures with objects, through dedicated guidelines for tangible gesture definition and common practices for different application domains. Third, it helps in building new tangible gesture interactive systems, supporting the choice among four different technological approaches (embedded and embodied, wearable, environmental, or hybrid) and providing general guidance for the different approaches. As an application of this framework, this thesis also presents seven tangible gesture interactive systems for three different application domains: interacting with the In-Vehicle Infotainment System (IVIS) of a car, emotional and interpersonal communication, and interaction in a smart home.
    For the first application domain, four different systems that use gestures on the steering wheel as a means of interaction with the IVIS were designed, developed and evaluated. For the second application domain, an anthropomorphic lamp able to recognize gestures that humans typically perform for interpersonal communication was conceived and developed. A second system, based on smart t-shirts, recognizes when two people hug and rewards the gesture with an exchange of digital information. Finally, a smart watch for recognizing gestures performed with objects held in the hand in the context of the smart home was investigated. The analysis of existing systems found in the literature and of the systems developed during this thesis shows that the framework has good descriptive and evaluative power. The applications developed during this thesis show that the proposed framework also has good generative power.

    Parameter tuning and cooperative control for automated guided vehicles

    For several practical control engineering applications it is desirable that multiple systems can operate independently as well as in cooperation with each other. Especially when the transition between individual and cooperative behavior and vice versa can be carried out easily, this results in flexible and scalable systems. A subclass is formed by systems that are physically separated during individual operation, and very tightly coupled during cooperative operation. One particular application of multiple systems that can operate independently as well as in concert with each other is the cooperative transportation of a large object by multiple Automated Guided Vehicles (AGVs). AGVs are used in industry to transport all kinds of goods, ranging from small trays of compact and video discs to pallets and 40-tonne coils of steel. Current applications typically comprise a fleet of AGVs, and the vehicles transport products on an individual basis. Recently there has been an increasing demand to transport very large objects such as sewer pipes, rotor blades of wind turbines and pieces of scenery for theaters, which may reach lengths of over thirty meters. A realistic option is to let several AGVs operate together to handle these types of loads. This Ph.D. thesis describes the development, implementation, and testing of distributed control algorithms for transporting a load by two or more Automated Guided Vehicles in industrial environments. We focused on the situations where the load is connected to the AGVs by means of (semi-)rigid interconnections. Attention was restricted to control on the velocity level, which we regard as an intermediate step for achieving fully automatic operation. In our setup the motion setpoint is provided by an external host. The load is assumed to be already present on the vehicles. Docking and grasping procedures are not considered.
    The project is a collaboration between the company FROG Navigation Systems (Utrecht, The Netherlands) and the Control Systems group of the Technische Universiteit Eindhoven. FROG provided testing facilities, including two omni-directional AGVs. Industrial AGVs are custom made for the transportation tasks at hand and come in a variety of forms. To reduce development times it is desirable to follow a model-based control design approach, as this allows generalization to a broad class of vehicles. We have adopted rigid body modeling techniques from the field of robotic manipulators to derive the equations of motion for the AGVs and load in a systematic way. These models are based on physical considerations such as Newton's second law and the positions and dimensions of the wheels, sensors, and actuators. Special emphasis is put on the modeling of the wheel-floor interaction, for which we have adopted tire models that stem from the field of vehicle dynamics. The resulting models have a clear physical interpretation and capture a large class of vehicles with arbitrary wheel configurations. This ensures that the controllers, which are based on these models, are applicable to a broad class of vehicles. An important prerequisite for achieving smooth cooperative behavior is that the individual AGVs operate at the required accuracy. The performance of an individual AGV is directly related to the precision of the estimates for the odometric parameters, i.e. the effective wheel diameters and the offsets of the encoders that measure the steering angles of the wheels. Cooperative transportation applications will typically require AGVs that are highly maneuverable, which means that all the wheels of an individual AGV should be able to steer.
    Since there will be more than one steering angle encoder, the identification of the odometric parameters is substantially more difficult for these omni-directional AGVs than for the mobile wheeled robots commonly seen in the literature and in laboratory settings. In this thesis we present a novel procedure for simultaneously estimating effective wheel diameters and steering angle encoder offsets by driving several pure circle segments. The validity of the tuning procedure is confirmed by experiments with the two omni-directional test vehicles under varying loads. An interesting result is that the effective wheel diameters of the rubber wheels of our AGVs increase with increasing load. A crucial aspect in all control designs is the reconstruction of the to-be-controlled variables from measurement data. Our to-be-controlled variables are the planar motion of the load and the motions of the AGVs with respect to the load, which have to be reconstructed from the odometric sensor information. The odometric sensor information consists of the drive encoder and steering encoder readings. We analyzed the observability of an individual AGV and proved that it is theoretically possible to reconstruct its complete motion from the odometric measurements. Due to practical considerations, we pursued a more pragmatic least-squares based observer design. We show that the least-squares based motion estimate is independent of the coordinate system that is being used. The motion estimator was subsequently analyzed in a stochastic setting. The relation between the motion estimator and the estimated velocity of an arbitrary point on the vehicle was explored. We derived how the covariance of the velocity estimate of an arbitrary point on the vehicle is related to the covariance of the motion estimate. We proved that there is one unique point on the vehicle for which the covariance of the estimated velocity is minimal.
    Next, we investigated how the local motion estimates of the individual AGVs can be combined to yield one global estimate. When the load and AGVs are rigidly interconnected, it suffices that each AGV broadcasts its local motion estimate and receives the estimates of the other AGVs. When the load is semi-rigidly interconnected to the AGVs, e.g. by means of revolute or prismatic joints, then generally each AGV needs to broadcast the corresponding information matrix as well. We showed that the information matrix remains constant when the load is connected to the AGV with a revolute joint mounted at the aforementioned unique point with the smallest velocity estimate covariance. This means that the corresponding AGV does not have to broadcast its information matrix in this special situation. The key issue in the control design for cooperative transportation tasks is that the various AGVs must not counteract each other's actions. The decentralized controller that we derived makes the AGVs track an externally provided planar motion setpoint while minimizing the interconnection forces between the load and the vehicles. Although the control design is applicable to cooperative transportation by multiple AGVs with arbitrary semi-rigid AGV-load interconnections, it is noteworthy that a particularly elegant solution arises when all interconnections are completely rigid. Then the derived local controllers have the same structure as the controllers that are normally used for individual operation. As a result, changing a few parameter settings and providing the AGVs with identical setpoints is all that is required to achieve cooperative behavior on the velocity level in this situation. The observer and controller designs for the case where the AGVs are completely rigidly interconnected to the load were successfully implemented on the two test vehicles. Experiments were carried out with and without a load consisting of a pallet with 300 kg of paving stones.
    The results were reproducible and illustrated the practical validity of the observer and controller designs. There were no substantial drawbacks when the local observers used only their local sensor information, which means that our setup can also operate satisfactorily when the velocity estimates are not shared with the other vehicles.
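The least-squares motion estimator discussed above can be illustrated for a single AGV. The sketch assumes a simple measurement model in which each steered wheel reports its rolling speed along its own rolling direction; the thesis's observer additionally covers the stochastic analysis and the covariance results summarized above, which this sketch omits.

```python
import math

def estimate_planar_motion(wheels):
    """Least-squares estimate of the body twist (vx, vy, omega) from
    odometric wheel data. Each wheel is (x, y, steer_angle, rolling_speed),
    with the measured speed along the wheel's rolling direction obeying
        v = cos(d)*(vx - omega*y) + sin(d)*(vy + omega*x).
    Solves the 3x3 normal equations A^T A z = A^T b directly."""
    AtA = [[0.0] * 3 for _ in range(3)]
    Atb = [0.0] * 3
    for x, y, d, v in wheels:
        # Regression row for this wheel: coefficients of (vx, vy, omega).
        row = (math.cos(d), math.sin(d), x * math.sin(d) - y * math.cos(d))
        for i in range(3):
            Atb[i] += row[i] * v
            for j in range(3):
                AtA[i][j] += row[i] * row[j]
    # Gauss-Jordan elimination on the 3x3 augmented system; adequate for
    # a sketch with well-conditioned (observable) wheel layouts.
    M = [AtA[i] + [Atb[i]] for i in range(3)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return tuple(M[i][3] / M[i][i] for i in range(3))
```

With more wheels than the three unknowns, the redundant measurements average out encoder noise, which is exactly why the thesis's observer analysis is carried out in a stochastic setting.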

    Autonomous Control and Automotive Simulator Based Driver Training Methodologies for Vehicle Run-Off-Road and Recovery Events

    Traffic fatalities and injuries continue to demand the attention of researchers and governments across the world as they remain significant factors in public health and safety. Enhanced legislation along with vehicle and roadway technology has helped to reduce the impact of traffic crashes in many scenarios. However, one specifically troublesome area of traffic safety that persists is run-off-road (ROR), where a vehicle's wheels leave the paved portion of the roadway and begin traveling on the shoulder or side of the road. Large percentages of fatal and injury traffic crashes are attributable to ROR. One of the most critical reasons why ROR scenarios quickly evolve into serious crashes is poor driver performance. Drivers are unprepared to safely handle the situation and often execute dangerous maneuvers, such as overcorrection or sudden braking, which can lead to devastating results. Currently implemented ROR countermeasures such as roadway infrastructure modifications and vehicle safety systems have helped to mitigate some ROR events but remain limited in their approach. A complete solution must directly address the primary factor contributing to ROR crashes, which is driver performance errors. Four vehicle safety control systems, based on sliding control, linear quadratic, state flow, and classical theories, were developed to autonomously recover a vehicle from ROR without driver intervention. The vehicle response was simulated for each controller under a variety of common road departure and return scenarios. The results showed that the linear quadratic and sliding control methodologies outperformed the other controllers in terms of overall stability. However, the linear quadratic controller was the only design to safely recover the vehicle in all of the simulation conditions examined. On average, it performed the recovery almost 50 percent faster and with 40 percent less lateral error than the sliding controller at the expense of higher yaw rates.
    The performance of the linear quadratic and sliding algorithms was investigated further to include more complex vehicle modeling, state estimation techniques, and sensor measurement noise. The two controllers were simulated amongst a variety of ROR conditions where typical driver performance was inadequate to safely operate the vehicle. The sliding controller recovered the fastest within the nominal conditions but exhibited large variability in performance amongst the more extreme ROR scenarios. Despite some small sacrifice in lateral error and yaw rate, the linear quadratic controller demonstrated a higher level of consistency and stability amongst the various conditions examined. Overall, the linear quadratic controller recovered the vehicle 25 percent faster than the sliding controller while using 70 percent less steering, which combined with its robust performance, indicates its high potential as an autonomous ROR countermeasure. The present status of autonomous vehicle control research for ROR remains premature for commercial implementation; in the meantime, another countermeasure which directly addresses driver performance is driver education and training. An automotive simulator based ROR training program was developed to instruct drivers on how to perform a safe and effective recovery from ROR. A pilot study, involving seventeen human subject participants, was conducted to evaluate the effectiveness of the training program and whether the participants' ROR recovery skills increased following the training. Based on specific evaluation criteria and a developed scoring system, it was shown that drivers did learn from the training program and were able to better utilize proper recovery methods. The pilot study also revealed that drivers improved their recovery scores by an average of 78 percent.
    Building on the success observed in the pilot study, a second human subject study was used to validate the simulator as an effective tool for replicating the ROR experience with the additional benefit of receiving insight into driver reactions to ROR. Analysis of variance results of subjective questionnaire data and objective performance evaluation parameters showed strong correlations to ROR crash data and previous ROR study conclusions. In particular, higher vehicle velocities, curved roads, and higher friction coefficient differences between the road and shoulder all negatively impacted drivers' recoveries from ROR. The only non-significant impact found was that of the roadway edge, indicating a possible limitation of the simulator system with respect to that particular environment variable. The validation study provides a foundation for further evaluation and development of a simulator based ROR recovery training program to help equip drivers with the skills to safely recognize and recover from this dangerous and often deadly scenario. Finally, building on the findings of the pilot study and validation study, a total of 75 individuals participated in a pre-post experiment to examine the effect of a training video on improving driver performance during a set of simulated ROR scenarios (e.g., on a high speed highway, a horizontal curve, and a residential rural road). In each scenario, the vehicle was unexpectedly forced into an ROR scenario for which the drivers were instructed to recover as safely as possible. The treatment group then watched a custom ROR training video while the control group viewed a placebo video. The participants then drove the same simulated ROR scenarios. The results suggest that the training video had a significant positive effect on drivers' steering response on all three roadway conditions as well as improvements in vehicle stability, subjectively rated demand on the driver, and self-evaluated performance in the highway scenario.
    Under the highway conditions, 84 percent of the treatment group and 52 percent of the control group recovered from the ROR events. In total, the treatment group recovered from the ROR events 58 percent of the time while the control group recovered 45 percent of the time. The results of this study suggest that even a short video about recovering from ROR events can significantly influence a driver's ability to recover. It is possible that additional training may have further benefits in recovering from ROR events.
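As a rough illustration of the kind of linear quadratic recovery controller evaluated above, the sketch below computes a discrete-time LQR gain for a minimal two-state lateral model (lateral error and heading error) and simulates a recovery from a 2 m road departure. The model, the cost weights and the vehicle parameters are illustrative assumptions, not the dissertation's controller.

```python
def dlqr_gain(A, B, Q, R, iters=1000):
    """Iterate the discrete-time Riccati difference equation to (near)
    convergence for a 2-state, single-input system and return the LQR
    gain K, used as u = -(K[0]*x[0] + K[1]*x[1])."""
    P = [row[:] for row in Q]
    for _ in range(iters):
        BtP = [B[0][0] * P[0][0] + B[1][0] * P[1][0],
               B[0][0] * P[0][1] + B[1][0] * P[1][1]]
        BtPB = BtP[0] * B[0][0] + BtP[1] * B[1][0]
        BtPA = [BtP[0] * A[0][0] + BtP[1] * A[1][0],
                BtP[0] * A[0][1] + BtP[1] * A[1][1]]
        K = [g / (R + BtPB) for g in BtPA]
        PA = [[P[i][0] * A[0][j] + P[i][1] * A[1][j] for j in range(2)]
              for i in range(2)]
        AtPA = [[A[0][i] * PA[0][j] + A[1][i] * PA[1][j] for j in range(2)]
                for i in range(2)]
        # P <- Q + A'PA - A'PB (R + B'PB)^{-1} B'PA
        P = [[Q[i][j] + AtPA[i][j] - BtPA[i] * K[j] for j in range(2)]
             for i in range(2)]
    return K

def simulate_recovery(e_y0, steps=200):
    """Drive lateral error e_y and heading error e_psi back toward zero
    from an initial road departure of e_y0 meters."""
    v, dt, L = 20.0, 0.05, 2.7                 # speed [m/s], step [s], wheelbase [m]
    A = [[1.0, v * dt], [0.0, 1.0]]            # kinematic lateral error dynamics
    B = [[0.0], [v * dt / L]]                  # steering angle drives heading error
    K = dlqr_gain(A, B, Q=[[1.0, 0.0], [0.0, 0.1]], R=1.0)
    x = [e_y0, 0.0]
    for _ in range(steps):
        u = -(K[0] * x[0] + K[1] * x[1])       # LQR steering command
        x = [A[0][0] * x[0] + A[0][1] * x[1] + B[0][0] * u,
             A[1][0] * x[0] + A[1][1] * x[1] + B[1][0] * u]
    return x
```

The studies' comparison of settling time, lateral error and yaw rate across controllers corresponds to varying the Q and R weights here: penalizing lateral error more heavily speeds the recovery at the cost of more aggressive steering and yaw motion.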

    Steering control for haptic feedback and active safety functions

    Steering feedback is an important element that defines driver–vehicle interaction. It strongly affects driving performance and is primarily dependent on the steering actuator's control strategy. Typically, the control method is open loop, that is, without any reference tracking; its drawbacks are a hardware-dependent steering feedback response and attenuated driver–environment transparency. This thesis investigates a closed-loop control method for electric power assisted steering and steer-by-wire systems. The advantages of this method, compared to open loop, are better hardware impedance compensation, a system-independent response, explicit transparency control and a direct interface to active safety functions. The closed-loop architecture, outlined in this thesis, includes a reference model, a feedback controller and a disturbance observer. The feedback controller forms the inner loop and ensures reference tracking, hardware impedance compensation and robustness against coupling uncertainties. Two different causalities are studied: torque and position control. The two are objectively compared from the perspective of (uncoupled and coupled) stability, tracking performance, robustness, and transparency. The reference model forms the outer loop and defines a torque or position reference variable, depending on the causality. Different haptic feedback functions are implemented to control the following parameters: inertia, damping, Coulomb friction and transparency. Transparency control in this application is particularly novel and is achieved sequentially. For non-transparent steering feedback, an environment model is developed such that the reference variable is a function of virtual dynamics. Consequently, the driver–steering interaction is independent of the actual environment. For driver–environment transparency, on the other hand, the environment interaction is estimated using an observer, and the estimated signal is then fed back to the reference model.
Furthermore, an optimization-based transparency algorithm is proposed. This renders the closed-loop system transparent in case of environmental uncertainty, even if the initial condition is non-transparent.The steering related active safety functions can be directly realized using the closed-loop steering feedback controller. This implies, but is not limited to, an angle overlay from the vehicle motion control functions and a torque overlay from the haptic support functions.Throughout the thesis, both experimental and the theoretical findings are corroborated. This includes a real-time implementation of the torque and position control strategies. In general, it can be concluded that position control lacks performance and robustness due to high and/or varying system inertia. Though the problem is somewhat mitigated by a robust H-infinity controller, the high frequency haptic performance remains compromised. Whereas, the required objectives are simultaneously achieved using a torque controller
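The idea of a virtual-dynamics reference model can be illustrated with a minimal sketch: the reference torque is computed from assumed virtual inertia, damping, and Coulomb friction parameters (`J_v`, `b_v`, `tau_c` are hypothetical values chosen here for illustration; the thesis's actual reference model and parameterization are more elaborate).

```python
import math

def reference_torque(omega, alpha, J_v=0.02, b_v=0.25, tau_c=0.4):
    """Sketch of a virtual-dynamics reference torque for steering feedback.

    omega : steering column angular velocity [rad/s]
    alpha : steering column angular acceleration [rad/s^2]
    J_v, b_v, tau_c : assumed virtual inertia, damping and Coulomb friction
    """
    # Inertia and viscous damping terms of the virtual environment model
    tau = J_v * alpha + b_v * omega
    # Coulomb friction opposes motion; a smooth tanh approximation of
    # sign(omega) avoids chattering around zero velocity
    tau += tau_c * math.tanh(omega / 0.05)
    return tau
```

Because the reference variable depends only on these virtual parameters, the resulting driver–steering interaction is decoupled from the actual road environment, which is the non-transparent mode described above.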

    Perception architecture exploration for automotive cyber-physical systems

    Get PDF
    2022 Spring. Includes bibliographical references.
In emerging autonomous and semi-autonomous vehicles, accurate environmental perception by automotive cyber-physical platforms is critical for achieving safety and driving performance goals. An efficient perception solution capable of high-fidelity environment modeling can improve Advanced Driver Assistance System (ADAS) performance and reduce the number of lives lost to traffic accidents caused by human driving errors. Enabling robust perception for vehicles with ADAS requires solving multiple complex problems related to the selection and placement of sensors, object detection, and sensor fusion. Current methods address these problems in isolation, which leads to inefficient solutions. For instance, there is an inherent accuracy-versus-latency trade-off between one-stage and two-stage object detectors, which makes selecting a suitable object detector from a diverse range of choices difficult. Further, even if a perception architecture were equipped with an ideal object detector capable of high-accuracy, low-latency inference, the relative position and orientation of the selected sensors (e.g., cameras, radars, lidars) determine whether static or dynamic targets are inside the field of view of each sensor, or within the combined field of view of the sensor configuration. If the combined field of view is too small or contains redundant overlap between individual sensors, important events and obstacles can go undetected. Conversely, if the combined field of view is too large, the number of false positive detections rises and appropriate sensor fusion algorithms are required for filtering. Sensor fusion algorithms also enable tracking of non-ego vehicles in situations where traffic is highly dynamic or there are many obstacles on the road.
Position and velocity estimation using sensor fusion algorithms has a lower margin for error when the trajectories of other vehicles in traffic are in the vicinity of the ego vehicle, as incorrect measurements can cause accidents. Due to the complex inter-dependencies between design decisions, constraints, and optimization goals, a framework capable of synthesizing perception solutions for automotive cyber-physical platforms is not trivial. We present a novel perception architecture exploration framework for automotive cyber-physical platforms capable of global co-optimization of the deep learning and sensing infrastructure. The framework can explore the synthesis of heterogeneous sensor configurations towards achieving vehicle autonomy goals. As our first contribution, we propose a novel optimization framework called VESPA that explores the design space of sensor placement locations and orientations to find the optimal sensor configuration for a vehicle. We demonstrate how our framework obtains optimal sensor configurations for heterogeneous sensors deployed across two contemporary real vehicles. We then utilize VESPA to create a comprehensive perception architecture synthesis framework called PASTA, which jointly addresses the selection and placement of sensors, object detection, and sensor fusion. Experimental results with the Audi-TT and BMW Minicooper vehicles show how PASTA can intelligently traverse the perception design space to find robust, vehicle-specific solutions.
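The field-of-view trade-off described above can be made concrete with a toy sketch: given candidate sensor mountings, compute what fraction of the azimuth around the vehicle is covered by at least one sensor. This is a deliberately simplified, azimuth-only model (it ignores range, elevation, mounting offsets, and the cost of redundant overlap, all of which a real placement optimizer such as VESPA must account for); the function name and parameterization are hypothetical.

```python
import numpy as np

def coverage_fraction(sensors, n_samples=360):
    """Fraction of azimuth angles covered by at least one sensor.

    sensors : list of (yaw_deg, fov_deg) pairs -- a simplified stand-in
              for a full 3D sensor placement, ignoring range and position.
    """
    angles = np.arange(n_samples) * 360.0 / n_samples
    covered = np.zeros(n_samples, dtype=bool)
    for yaw, fov in sensors:
        # Wrapped angular distance from each sample to the sensor boresight
        diff = (angles - yaw + 180.0) % 360.0 - 180.0
        covered |= np.abs(diff) <= fov / 2.0
    return covered.mean()
```

For example, two 90-degree sensors facing forward and backward cover roughly half of the azimuth; a placement search would trade such coverage against overlap and sensor count.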