    Litigating Partial Autonomy

    Who is responsible when a semi-autonomous vehicle crashes? Automobile manufacturers claim that because Advanced Driver Assistance Systems (ADAS) require constant human oversight even when autonomous features are active, the driver is always fully responsible when supervised autonomy fails. This Article argues that the automakers’ position is likely wrong both descriptively and normatively. On the descriptive side, current products liability law offers a pathway toward shared legal responsibility. Automakers, after all, have engaged in numerous marketing efforts to gain public trust in automation features. When drivers’ trust turns out to be misplaced, drivers are not always able to react in time to retake control of the car. In such cases, the automaker is likely to face primary liability, perhaps with a reduction for the driver’s comparative fault. On the normative side, this Article argues that the nature of modern semi-autonomous systems requires the human and the machine to engage in a collaborative driving endeavor. The human driver should not bear full liability for harm arising from this shared responsibility. As lawsuits involving partial autonomy increase, the legal system will face growing challenges in incentivizing safe product development, allocating liability in line with fair principles, and leaving room for a nascent technology to improve in ways that, over time, will add substantial safety protections. The Article develops a framework for considering how those policy goals can play a role in litigation involving autonomous features. It offers three key recommendations: (1) that courts consider collaborative driving as a system when allocating liability; (2) that the legal system recognize and encourage regular software updates for vehicles; and (3) that customers pursue fraud and warranty claims when manufacturers overstate their autonomous capabilities. Claims for economic damages can encourage manufacturers to internalize the cost of product defects before, rather than after, their customers suffer serious physical injury.

    Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web

    Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information; through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C’s Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied to a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario where drivers’ observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes, an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting that the approach is sound.
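    The pattern described above (an in-car HMI turning a driver utterance into a shareable observation) can be illustrated with a minimal sketch. This is not the paper's implementation: the endpoint URL, procedure and property URIs are hypothetical placeholders, and a real deployment would use an OGC SOS/SWE service with its XML encodings rather than the simplified JSON shown here. The fields deliberately mirror the O&M observation model (procedure, observed property, phenomenon time, feature of interest, result).

```python
# Minimal sketch (assumed names/URLs): publishing a human-generated observation
# from an in-car HMI to a Sensor Web style service.
import json
import urllib.request
from datetime import datetime, timezone

OBSERVATION_ENDPOINT = "http://example.org/observations"  # hypothetical service

def publish_driver_observation(report_text: str, lat: float, lon: float) -> int:
    """Send a driver's spoken report as an O&M-style observation record."""
    observation = {
        # Fields mirror the O&M observation model used by OGC SWE services.
        "procedure": "urn:example:procedure:in-car-hmi",          # assumed URI
        "observedProperty": "urn:example:property:trafficEvent",  # assumed URI
        "phenomenonTime": datetime.now(timezone.utc).isoformat(),
        "featureOfInterest": {"type": "Point", "coordinates": [lon, lat]},
        "result": report_text,  # free-text result captured via the HMI
    }
    request = urllib.request.Request(
        OBSERVATION_ENDPOINT,
        data=json.dumps(observation).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status  # HTTP status of the insertion

# Example: a dialogue manager could call this after interpreting an utterance
# such as "heavy traffic ahead".
# publish_driver_observation("heavy traffic ahead", lat=40.4168, lon=-3.7038)
```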

    An ensemble deep learning approach for driver lane change intention inference

    With the rapid development of intelligent vehicles, drivers are increasingly likely to share their control authority with the intelligent control unit. To build efficient Advanced Driver Assistance Systems (ADAS) and shared-control systems, the vehicle needs to understand drivers’ intentions and activities in order to generate assistive and collaborative control strategies. In this study, a driver intention inference system that focuses on highway lane change maneuvers is proposed. First, a high-level driver intention mechanism and framework are introduced. Then, a vision-based intention inference system is proposed, which captures multi-modal signals using multiple low-cost cameras and the VBOX vehicle data acquisition system. A novel ensemble bi-directional recurrent neural network (RNN) model with Long Short-Term Memory (LSTM) units is proposed to deal with the time-series driving sequences and temporal behavioral patterns. Naturalistic highway driving data consisting of lane-keeping, left, and right lane change maneuvers are collected and used for model construction and evaluation. Furthermore, the driver's pre-maneuver activities are statistically analyzed. It is found that, to maintain situation awareness, drivers usually check the mirrors for more than six seconds before they initiate a lane change maneuver, and the time interval between steering the hand wheel and crossing the lane is about 2 s on average. Finally, hypothesis testing is conducted to show the significant improvement of the proposed algorithm over existing ones. With five-fold cross-validation, the ensemble bi-directional LSTM (EBiLSTM) model achieves an average accuracy of 96.1% for intentions inferred 0.5 s before the maneuver starts.
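    To make the modeling idea concrete, the sketch below shows an ensemble of bidirectional LSTM classifiers over three intention classes (lane keeping, left change, right change). It is not the authors' EBiLSTM: the feature dimension, sequence length, hidden size, and ensemble size are illustrative assumptions, and the per-frame features extracted from the cameras and VBOX signals are simulated with random tensors.

```python
# Minimal sketch of ensembled bidirectional LSTM intention classification.
# All dimensions below are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class BiLSTMIntentionNet(nn.Module):
    def __init__(self, feature_dim: int = 32, hidden_dim: int = 64, num_classes: int = 3):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, feature_dim) sequence of per-frame features
        outputs, _ = self.lstm(x)
        # Classify from the last time step's concatenated forward/backward states.
        return self.classifier(outputs[:, -1, :])

def ensemble_predict(models: list[nn.Module], x: torch.Tensor) -> torch.Tensor:
    """Average the softmax outputs of the base learners and take the argmax."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=-1) for m in models])
    return probs.mean(dim=0).argmax(dim=-1)

# Example: five base learners scoring a batch of 3-second windows sampled at
# 10 Hz (30 time steps), each step a 32-dimensional feature vector.
models = [BiLSTMIntentionNet() for _ in range(5)]
batch = torch.randn(8, 30, 32)
print(ensemble_predict(models, batch))  # predicted class index per sequence
```

    Averaging softmax outputs is one simple way to combine the base learners; training each on different folds or feature subsets is a common design choice for such ensembles.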

    Promising State Policies for Personalized Learning

    This report is a valuable resource for state policymakers—whether they are seeking to create conditions in state policy to support personalized learning, moving forward with initiatives to develop personalized learning pilot programs, hosting task forces to explore policy issues and needs, or taking a comprehensive policy approach to supporting advanced personalized learning models. Personalized learning is an approach in which instruction is tailored to each student's strengths, needs, and interests—including enabling student voice and choice in what, how, when, and where they learn—to provide flexibility and supports that ensure mastery of the highest standards possible.

    Cities Building Community Wealth

    As cities struggle with rising inequality, widespread economic hardship, and racial disparities, something surprising and hopeful is also stirring. In a growing number of America's cities, a more inclusive, community-based approach to economic development is being taken up by a new breed of economic development professionals and mayors. This approach to economic development could be on the cusp of going to scale. It's time it had a name. We call it community wealth building.

    A novel on-board unit to accelerate the penetration of ITS services

    In-vehicle connectivity has expanded significantly in recent years. Car manufacturers have mainly proposed OBU-based solutions, but these solutions do not take full advantage of the opportunities offered by inter-vehicle peer-to-peer communications. In this paper we introduce GRCBox, a novel architecture that allows user devices to communicate directly when located in neighboring vehicles. We also describe EYES, an application we developed to illustrate the type of novel applications that can be implemented on top of GRCBox. EYES is an ITS overtaking assistance system that provides the driver with a real-time video feed from the vehicle located in front. Finally, we evaluated GRCBox and the EYES application and showed that, for device-to-device communication, the performance of the GRCBox architecture is comparable to that of an infrastructure network, introducing only a negligible impact.
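    The communication pattern underlying an EYES-like application can be sketched as follows. This is not the EYES implementation: the peer address, port, datagram size, and frame source are assumptions, and a production system would add peer discovery, fragmentation of large frames, loss handling, and compression tuning over whatever direct link the on-board unit provides.

```python
# Minimal sketch (assumed addresses/ports): streaming camera frames from the
# front vehicle's device to the rear vehicle's device over a direct link.
import socket
import struct

PEER_ADDRESS = ("192.168.1.20", 5005)  # assumed address of the rear device
MAX_DATAGRAM = 60000                   # keep each frame within one datagram

def send_frames(frames) -> None:
    """Front vehicle: send JPEG-encoded frames, each prefixed with id and length."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for frame_id, jpeg_bytes in enumerate(frames):
        if len(jpeg_bytes) > MAX_DATAGRAM:
            continue  # a real implementation would fragment or re-encode
        header = struct.pack("!II", frame_id, len(jpeg_bytes))
        sock.sendto(header + jpeg_bytes, PEER_ADDRESS)

def receive_frames(handle_frame) -> None:
    """Rear vehicle: receive frames and pass them to a display callback."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", PEER_ADDRESS[1]))
    while True:
        datagram, _ = sock.recvfrom(MAX_DATAGRAM + 8)
        frame_id, length = struct.unpack("!II", datagram[:8])
        handle_frame(frame_id, datagram[8:8 + length])
```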