HAL-CentraleSupelec
74818 research outputs found
Critical safety risks for passengers onboard level 4 automated shuttles in Europe: mitigation strategies and public policy implications
A core value proposition of driverless automated vehicles (AVs) is reducing road accidents largely attributed to human errors and increasing traffic safety. Nonetheless, safety remains a foremost concern in the adoption of AVs. This paper enriches the academic and policy debate on driverless (level 4) AV safety for onboard passengers. We conduct semi-structured interviews with 47 Connected Cooperative Automated Mobility (CCAM) experts from diverse sectors and 11 European countries to gain insights into their views on the critical safety issues of driverless AVs and possible mitigation strategies in Europe. We then analyse the data using reflexive thematic analysis. We find that the critical safety issues are injury, accident or death of passengers, adverse weather/environmental conditions, cybersecurity issues, perceived safety risks, and AV functional failure. We argue that the safety risks at specific locations identified in the extant literature are interlinked and generalisable in the European context. The key mitigation strategies are monitoring in-vehicle conditions, designing AVs for functional safety, increasing road testing to improve AV perception and sensing technologies, user education and communication about AVs, and support from road infrastructure and V2X technologies. Further mitigation strategies are facilitating stakeholder collaboration, knowledge and data sharing, enacting and enforcing safety standards and regulations, and separating AVs from human drivers. We then analyse the mitigation strategies using five governance policy steering instruments to identify workable public policy approaches to support policymaking on driverless (level 4) AVs in Europe. We argue that a combination of governing-by-enabling and governing-by-authority policy steering instruments could support mitigating the critical safety risks of level 4 AVs and play a key role in supporting the safe integration of driverless AVs into transportation systems and the transition to a connected, cooperative, automated mobility future in Europe.
Spectro-photometry of Phobos simulants, II: Effects of porosity and texture
Surface porosity and texture have been found to be important properties for small bodies. Some asteroids and comets can exhibit an extremely high surface porosity in the first millimetre layer. This layer may be produced by various processes and maintained by the lack of an atmosphere. However, the influence of porosity on the spectro-photometric properties of small-body surfaces is not yet fully understood. In this study, we looked into the effect of texture on the spectro-photometric properties of Phobos regolith spectroscopic simulants. Macro- and micro-porosity were created by mixing the simulants with ultra-pure water, producing ice-dust particles, and then sublimating the water. The sublimation of the water ice enabled the production of porous and rough powdered simulants with significant micro- and macro-porosity associated with macro-roughness. The reflectance spectroscopic properties in the visible and near-infrared (0.5–4.2 μm) demonstrate a brightening of the porous samples in comparison to the compact ones. One simulant exhibits a bluing of the spectral slope after increasing porosity, which is likely linked to the presence of expandable phyllosilicates. In the mid-infrared range, a contrast increase of the 10 μm emissivity-related plateau due to silicates is observed. This spectral feature is typically observed as a 10 μm emissivity plateau on some asteroids, making the mid-infrared region important for assessing mineralogy and surface texture. Photometry reveals a modification of the phase-reddening behaviour between the compact powder and the sublimation residue for both simulants. However, the observed behaviour differs between the simulants, suggesting that the phase reddening may depend on the composition of the simulants. The phase curves of the sublimation residues exhibit a higher contribution of forward scattering. The derivation of the Hapke parameters indicates an increase in roughness for the porous sample, but no significant modification of the opposition effect. The modifications of the spectrophotometric properties observed in this experiment are definitely due to the textural changes obtained after sublimation, which depend on the initial composition of the simulants. This study aims to provide new insights into the understanding of porosity by using two Phobos simulants in the context of the upcoming JAXA Martian Moons eXploration mission. We suggest that the Phobos blue unit may be due to the presence of a highly porous layer, rather than only to space-weathering processes, as often postulated.
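As an illustration of the quantities discussed in this abstract, the sketch below shows one way a visible/near-infrared spectral slope and a phase-reddening coefficient could be computed from reflectance spectra. It is a minimal sketch under assumed conventions; the wavelength bounds, reflectance values, and phase angles are hypothetical placeholders and do not come from the paper.

```python
# Minimal sketch (not the authors' pipeline): estimating a spectral slope and a
# phase-reddening coefficient from reflectance spectra. All numbers below are
# hypothetical placeholders.
import numpy as np

def spectral_slope(wavelength_um, reflectance, lam1=0.5, lam2=2.5):
    """Normalised spectral slope (per micron) between two wavelengths."""
    r1 = np.interp(lam1, wavelength_um, reflectance)
    r2 = np.interp(lam2, wavelength_um, reflectance)
    return (r2 - r1) / (r1 * (lam2 - lam1))

def phase_reddening(phase_angles_deg, slopes):
    """Linear fit of spectral slope vs. phase angle; the fitted gradient
    quantifies phase reddening (positive) or bluing (negative)."""
    coeff = np.polyfit(phase_angles_deg, slopes, 1)
    return coeff[0]  # change in slope per degree of phase angle

# Hypothetical example: slopes measured at several phase angles
phases = np.array([10.0, 30.0, 50.0, 70.0])
slopes = np.array([0.05, 0.06, 0.08, 0.11])   # per micron
print(f"phase-reddening coefficient: {phase_reddening(phases, slopes):.4f} /deg")
```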
Position: Causal Machine Learning Requires Rigorous Synthetic Experiments for Broader Adoption
Causal machine learning has the potential to revolutionize decision-making by combining the predictive power of machine learning algorithms with the theory of causal inference. However, these methods remain underutilized by the broader machine learning community, in part because current empirical evaluations do not permit assessment of their reliability and robustness, undermining their practical utility. Specifically, one of the principal criticisms made by the community is the extensive use of synthetic experiments. We argue, on the contrary, that synthetic experiments are essential and necessary to precisely assess and understand the capabilities of causal machine learning methods. To substantiate our position, we critically review the current evaluation practices, spotlight their shortcomings, and propose a set of principles for conducting rigorous empirical analyses with synthetic data. Adopting the proposed principles will enable comprehensive evaluations that build trust in causal machine learning methods, driving their broader adoption and impactful real-world use.
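The sketch below illustrates the kind of synthetic experiment the position paper advocates: data are simulated from a known causal model, so the true effect is available as ground truth against which an estimator can be checked. The data-generating process and the estimator are illustrative choices on my part, not the paper's benchmark.

```python
# Synthetic-experiment sketch: a known average treatment effect (ATE) is planted
# in simulated data with confounding, then recovered by a simple adjusted model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5_000
true_ate = 2.0                               # known ground-truth effect

x = rng.normal(size=(n, 3))                  # observed confounders
propensity = 1 / (1 + np.exp(-x[:, 0]))      # treatment depends on x -> confounding
t = rng.binomial(1, propensity)
y = true_ate * t + x @ np.array([1.0, -0.5, 0.3]) + rng.normal(size=n)

# Naive difference in means is biased by confounding...
naive = y[t == 1].mean() - y[t == 0].mean()
# ...while a simple outcome regression adjusting for x recovers the effect.
model = LinearRegression().fit(np.column_stack([t, x]), y)
adjusted = model.coef_[0]

print(f"true ATE: {true_ate:.2f}  naive: {naive:.2f}  adjusted: {adjusted:.2f}")
```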
Regulating TSO interaction in bid filtering for European balancing markets
Europe is undertaking projects for near real-time common balancing markets to meet the flexibility needs induced by renewable deployment. A new congestion management method, bid filtering, has been authorized by regulation to prevent unsolvable last-minute congestion. It is designed to manage internal congestion and is performed by each Transmission System Operator (TSO) separately, without knowledge of bids in other zones. Bids from all zones are shared in the same market, which means that filtering by one TSO can affect welfare in other zones, depending on its objective and on regulation. This paper evaluates the potential effects of multiple TSOs interacting with different filtering strategies. Three TSO strategies are considered – Benevolent, Local, and Conservative – and different combinations are tested using multi-agent reinforcement learning. Results show that although several TSOs filtering benevolently leads to the highest net social welfare, it is unlikely that all TSOs will adopt this strategy given political and social constraints in the EU27 countries. We discuss several regulatory options to create the conditions for social-welfare-maximizing filtering and to foster coordination between TSOs.
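To make the multi-agent set-up concrete, the sketch below shows two TSO agents repeatedly choosing a filtering strategy and learning from a stylised welfare signal via independent Q-learning. The payoff rule is hypothetical and only meant to illustrate the tension between system-wide and local welfare; it is not the paper's market model.

```python
# Toy independent Q-learning over the three filtering strategies named above.
# The welfare() payoff is a made-up coordination-game surrogate, not real data.
import numpy as np

STRATEGIES = ["Benevolent", "Local", "Conservative"]
rng = np.random.default_rng(1)

def welfare(a0: int, a1: int) -> tuple:
    """Hypothetical per-TSO welfare for a pair of strategy choices."""
    system_bonus = 3.0 if (a0 == 0 and a1 == 0) else 0.0   # both Benevolent
    local_gain = np.array([1.0 if a == 1 else 0.0 for a in (a0, a1)])
    return tuple(0.5 * system_bonus + local_gain)

q = np.zeros((2, len(STRATEGIES)))   # one Q-vector per TSO
eps, lr = 0.1, 0.05
for _ in range(20_000):
    acts = [int(rng.integers(3)) if rng.random() < eps else int(np.argmax(q[i]))
            for i in range(2)]
    rewards = welfare(*acts)
    for i in range(2):
        q[i, acts[i]] += lr * (rewards[i] - q[i, acts[i]])

for i in range(2):
    print(f"TSO {i} prefers: {STRATEGIES[int(np.argmax(q[i]))]}")
```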
A Comparative Study on Ray-Tracing and Physical-Optics Methods for the Analysis of Transmitarray Antennas
This study compares the accuracy of two semi-analytical methods based on ray tracing (RT) and physical optics (PO) for the analysis and design of transmitarrays of different sizes. The assessment considers multiple focal-to-diameter ratios and eight unit cells implementing 3-bit uniform phase quantization at 30 GHz. The results, validated by full-wave simulations, demonstrate that both the RT and PO methods yield excellent agreement in standard-profile configurations. However, the RT approach, which assumes the transmitarray to be in the far-field region of the primary feed, provides an inaccurate evaluation of the gain of low-profile antennas, with errors of up to 1.5 dB at 34 GHz. In contrast, the PO method accounts for near-field effects and accurately predicts the transmitarray performance even when the focal distance is comparable to the wavelength. Additionally, the PO method enables the analysis of large antennas with computational efficiency comparable to RT, requiring approximately four minutes for transmitarrays with diameters of 30 wavelengths.
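For context on the design quantities mentioned here, the sketch below computes the phase compensation each transmitarray cell must provide to collimate the spherical wave from a focal feed, together with its 3-bit (8-level) quantisation. It is an illustrative back-of-the-envelope calculation, not the paper's RT or PO analysis; the focal length and cell period are assumed values.

```python
# Illustrative transmitarray phase-compensation sketch with 3-bit quantisation.
# Frequency matches the abstract (30 GHz); focal length and cell period are
# hypothetical choices.
import numpy as np

c = 299_792_458.0
f = 30e9
lam = c / f
k = 2 * np.pi / lam
focal = 10 * lam              # hypothetical focal distance
diameter = 30 * lam           # aperture of 30 wavelengths
cell = 0.5 * lam              # hypothetical unit-cell period

xs = np.arange(-diameter / 2, diameter / 2, cell)
xx, yy = np.meshgrid(xs, xs)

# Ideal compensation: cancel the extra path length from the feed to each cell.
path = np.sqrt(focal**2 + xx**2 + yy**2)
phase_ideal = np.mod(-k * (path - focal), 2 * np.pi)

# 3-bit uniform quantisation: 8 phase states spaced 45 degrees apart.
step = 2 * np.pi / 8
phase_3bit = np.round(phase_ideal / step) * step

rms_err_deg = np.degrees(np.sqrt(np.mean((phase_3bit - phase_ideal) ** 2)))
print(f"cells: {xx.size}, RMS quantisation error: {rms_err_deg:.1f} deg")
```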
On the Design, Fabrication and Characterization of a Miniaturized and Optically Transparent CSRRs-Loaded Antenna Operating in C-Band for a Dual Optical-RF Purpose
A miniaturized and optically transparent CSRRs-loaded (Circular Split Ring Resonators) antenna operating in C-band is presented. Two miniaturization techniques were used to obtain a highly miniaturized microstrip antenna: by combining a high-permittivity substrate with a CSRRs-loaded ground plane, a size reduction of over 70% was obtained. The antenna dimension equals 0.1λ at fr = 5.3 GHz, with a simulated gain of −3.9 dBi. Optical transparency is achieved by printing micrometric metal mesh films, with a pitch of 300 μm and a metal strip width of 10 μm, on both sides of a sapphire substrate. Homemade prototypes were fabricated and characterized at microwave frequencies: the operating frequency and radiation pattern were measured experimentally. Snapshots from an endoscopic camera were taken behind the transparent antenna as a proof of concept. A new dual optical-RF application has therefore been developed.
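The figures quoted above can be checked with a quick back-of-the-envelope calculation, sketched below: the physical size implied by 0.1λ at 5.3 GHz, and the geometric open-area ratio of the metal mesh, a common first-order estimate of its optical transparency. This is a rough sanity check, not the authors' design procedure.

```python
# Back-of-the-envelope check of the dimensions quoted in the abstract.
c = 299_792_458.0
fr = 5.3e9
lam = c / fr                            # free-space wavelength, ~56.6 mm
print(f"antenna dimension ~ 0.1*lambda = {0.1 * lam * 1e3:.1f} mm")

pitch, strip = 300e-6, 10e-6            # mesh pitch and strip width from the abstract
open_area = (1 - strip / pitch) ** 2    # fraction of the surface left unmetallised
print(f"mesh open-area ratio ~ {open_area:.1%} (first-order optical transparency)")
```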
Toward a consensus on extended Shapley values for multi-choice games
Comparison of automatic techniques for assessing brain and endocranial torque
The study of the evolution of the human brain, which does not fossilise, is traditionally carried out through the analysis of the endocast, a physical or, more commonly today, digital cast of the inside of the braincase. It is the only way to obtain an indication of the shape, size, or structure of the brain. Of particular interest are the anatomical asymmetries of the brain, which possibly reflect its functional asymmetries, in particular handedness and the lateralisation of language areas, which are thought to be largely unique to humans and whose moment of appearance in human evolution remains an open question. In the absence of direct observation of the internal asymmetries of the brain, the "torque" has often been used as an indirect marker of this asymmetry. Unfortunately, the literature on this subject (existence, characterisation, specificity to humans) is mixed, in particular because of the heterogeneity of the methods employed. Within the PaleoBRAIN project, we acquired MRI images of 75 human subjects in order to segment both the brain and the inside of the braincase – analogous to a digital endocast that would have been obtained from a CT image – for each individual. With both the grey-level images and the extracted surfaces available, we were able to comparatively evaluate several methods for estimating the torque in these two types of data. We were also able to compare methods that estimate the torque by registering the individual data to a perfectly symmetric atlas, possibly biasing the result, with other methods that compute such an atlas from the data themselves. Finally, by comparing the torque estimated on the cortical surface with that obtained on the digital endocast, we were able to validate the use of the endocast as a valid proxy for the brain when assessing this anatomical feature.
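As an illustration of the kind of asymmetry measure at stake, the sketch below quantifies a simple left-right protrusion (petalia-like) index from a binary brain or endocast segmentation. This is a hypothetical index of my own, not one of the methods compared in the paper; the axis conventions and the toy volume are placeholders.

```python
# Hypothetical torque-like asymmetry index on a binary segmentation volume.
import numpy as np

def frontal_occipital_extent(mask: np.ndarray) -> tuple:
    """Anterior-most and posterior-most occupied slices along the y axis."""
    ys = np.where(mask.any(axis=(0, 2)))[0]
    return int(ys.min()), int(ys.max())

def petalia_index(mask: np.ndarray) -> int:
    """Difference (in voxels) of anterior extent between the two halves.
    Assumes axis 0 is left-right with the midline at the centre of the array."""
    mid = mask.shape[0] // 2
    left_min, _ = frontal_occipital_extent(mask[:mid])
    right_min, _ = frontal_occipital_extent(mask[mid:])
    return right_min - left_min   # >0: left half protrudes further anteriorly

# Toy volume: an ellipsoid with the left half shifted forward by 3 voxels.
z, y, x = np.ogrid[-40:40, -50:50, -40:40]
mask = ((x / 38) ** 2 + (y / 48) ** 2 + (z / 38) ** 2) < 1.0
mask = np.swapaxes(mask, 0, 2)          # make axis 0 the left-right axis
mask[: mask.shape[0] // 2] = np.roll(mask[: mask.shape[0] // 2], -3, axis=1)
print(f"frontal petalia index: {petalia_index(mask)} voxels")
```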
Two-Stage Stochastic Resilience Optimization of Converter Stations Under Uncertain Mainshock-Aftershock Sequences
Reducing the vulnerability and enhancing the resilience of power systems, particularly converter stations, against earthquakes is crucial for maintaining their safe operation. Nevertheless, earthquakes are often accompanied by aftershocks and encompass many uncertainties, posing significant challenges for the development of a joint pre-earthquake preparation and post-earthquake restoration strategy. In this article, a novel two-stage stochastic programming model is put forth to bolster the resilience of converter stations under uncertainties associated with mainshock-aftershock sequences. The model comprises two stages: the first stage focuses on designing an equipment hardening strategy (EHS) and a spare parts strategy (SPS) for converter stations before earthquakes; the second stage is dedicated to the optimization of recovery scheduling (RS) in the wake of mainshock-aftershock sequences. To mitigate the endogenous uncertainties of the model, a tolerated random number generation method is employed to generate deterministic scenarios in the second stage. The resulting optimization model is then efficiently solved using a combination of the sample average approximation and the progressive hedging algorithm. To evaluate the performance of the proposed resilience optimization method, comparative studies are carried out on a real ±800 kV converter station located in the city of Yibin, Sichuan Province, P.R. China. Results from this application demonstrate that the EHS predominantly focuses on hardening the equipment of the same circuit to guarantee a low transmission capacity under diverse earthquake scenarios, while the SPS proves more effective than the EHS in scenarios with high values of mainshock peak ground acceleration (PGA) and aftershock occurrence intensity.
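The two-stage structure described here can be illustrated with a toy sample-average-approximation (SAA) sketch: a here-and-now split between hardening and spare parts is chosen before the earthquake, and the recovery cost is then averaged over sampled mainshock-aftershock scenarios. The cost functions, scenario model, and recourse rule below are hypothetical and far simpler than the paper's model.

```python
# Toy two-stage SAA sketch: first-stage budget split, second-stage recovery cost
# averaged over sampled mainshock-aftershock scenarios. All numbers are made up.
import numpy as np

rng = np.random.default_rng(7)
N_SCEN = 500
pga = rng.lognormal(mean=-1.0, sigma=0.5, size=N_SCEN)           # mainshock PGA
aftershocks = rng.poisson(lam=2.0, size=N_SCEN)                  # aftershock count

def second_stage_cost(harden: float, spares: float, pga_s: float, n_after: int) -> float:
    """Recovery cost for one scenario: damage grows with PGA and aftershocks,
    hardening reduces damage, spare parts speed up restoration."""
    damage = max(0.0, pga_s * (1 + 0.3 * n_after) - 0.8 * harden)
    repair_time = damage / (1.0 + 2.0 * spares)
    return 10.0 * repair_time                                     # outage cost

best = None
for harden in np.linspace(0, 1, 11):           # first-stage decisions on a grid
    spares = 1.0 - harden                      # fixed total budget split
    invest = 3.0 * harden + 2.0 * spares
    expected_recourse = np.mean([second_stage_cost(harden, spares, p, a)
                                 for p, a in zip(pga, aftershocks)])
    total = invest + expected_recourse
    if best is None or total < best[0]:
        best = (total, harden, spares)

print(f"SAA-optimal split: harden={best[1]:.1f}, spares={best[2]:.1f}, "
      f"expected total cost={best[0]:.2f}")
```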
LEMUR: Latent EM Unsupervised Regression for Sparse Inverse Problems
Most methods for sparse signal recovery require setting one or several hyperparameters. We propose an unsupervised method to estimate the parameters of a Bernoulli-Gaussian (BG) model describing sparse signals. The proposed method is first derived for denoising problems, based on a maximum likelihood (ML) approach. Then, an extension to general inverse problems is achieved through a latent variable formulation. Two expectation-maximization (EM) algorithms are then proposed to estimate the signal together with the BG model parameters. Combining these two approaches leads to the proposed LEMUR algorithm. All proposed algorithms are evaluated in extensive simulations in terms of their ability to recover the parameters and to provide accurate sparse signal estimates.
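To make the denoising setting concrete, the sketch below runs an EM loop that estimates the parameters of a Bernoulli-Gaussian model from noisy observations. It is a minimal sketch of this class of algorithm under standard assumptions, not necessarily LEMUR's exact updates.

```python
# EM sketch for the Bernoulli-Gaussian denoising model:
# x_i = s_i * a_i, s_i ~ Bernoulli(p), a_i ~ N(0, var_x), y_i = x_i + n_i,
# n_i ~ N(0, var_n). The loop estimates (p, var_x, var_n) from y alone.
import numpy as np

rng = np.random.default_rng(3)
n, p_true, var_x_true, var_n_true = 10_000, 0.1, 4.0, 0.25
s = rng.random(n) < p_true
y = s * rng.normal(0, np.sqrt(var_x_true), n) + rng.normal(0, np.sqrt(var_n_true), n)

def gauss(y, var):
    return np.exp(-0.5 * y**2 / var) / np.sqrt(2 * np.pi * var)

p, var_x, var_n = 0.5, 1.0, 1.0          # crude initialisation
for _ in range(200):
    # E-step: posterior probability that each sample is "active",
    # plus posterior mean/variance of its amplitude if active.
    num = p * gauss(y, var_x + var_n)
    gamma = num / (num + (1 - p) * gauss(y, var_n))
    m = var_x / (var_x + var_n) * y
    v = var_x * var_n / (var_x + var_n)
    # M-step: closed-form parameter updates.
    p = gamma.mean()
    var_x = np.sum(gamma * (m**2 + v)) / np.sum(gamma)
    var_n = np.mean(gamma * ((y - m) ** 2 + v) + (1 - gamma) * y**2)

print(f"p={p:.3f} (true {p_true}), var_x={var_x:.2f} (true {var_x_true}), "
      f"var_n={var_n:.3f} (true {var_n_true})")
```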