INRIA a CCSD electronic archive server

    Bigraded Castelnuovo-Mumford regularity and Groebner bases

    No full text
    We study the relation between the bigraded Castelnuovo-Mumford regularity of a bihomogeneous ideal I in the coordinate ring of the product of two projective spaces and the bidegrees of a Groebner basis of I with respect to the degree reverse lexicographical monomial order in generic coordinates. For the single-graded case, Bayer and Stillman unraveled all aspects of this relationship forty years ago, and their results led to complexity estimates for computations with Groebner bases. We build on this work to introduce a bounding region for the bidegrees of the minimal generators of bihomogeneous Groebner bases of I. We also use this region to certify the presence of some minimal generators close to its boundary. Finally, we show that, up to a certain shift, this region is related to the bigraded Castelnuovo-Mumford regularity of I.
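
    As a toy illustration (not the paper's method), one can compute a degree reverse lexicographical Groebner basis of a small, made-up bihomogeneous ideal with SymPy and read off the bidegrees of its elements; every polynomial below is an assumption chosen for the example:

        # Illustrative sketch: bidegrees of a degrevlex Groebner basis of a
        # small bihomogeneous ideal in k[x0, x1; y0, y1].
        from sympy import symbols, groebner

        x0, x1, y0, y1 = symbols('x0 x1 y0 y1')

        # Hypothetical bihomogeneous generators, of bidegrees (1,1) and (2,1).
        f1 = x0*y0 + x1*y1
        f2 = x0**2*y1 + x1**2*y0

        G = groebner([f1, f2], x0, x1, y0, y1, order='grevlex')

        def bidegree(g):
            # Elements of a Groebner basis of a bihomogeneous ideal are themselves
            # bihomogeneous, so the bidegree can be read off any single monomial.
            e = g.as_poly(x0, x1, y0, y1).monoms()[0]
            return (e[0] + e[1], e[2] + e[3])

        print(sorted(bidegree(g) for g in G))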

    An Autoethnography on Visualization Literacy: A Wicked Measurement Problem

    No full text
    We contribute an autoethnographic reflection on the complexity of defining and measuring visualization literacy (i.e., the ability to interpret and construct visualizations) to expose our tacit thoughts that often exist in-between polished works and remain unreported in individual research papers. Our work is inspired by the growing number of empirical studies in visualization research that rely on visualization literacy as a basis for developing effective data representations or educational interventions. Researchers have already made various efforts to assess this construct, yet it is often hard to pinpoint either what we want to measure or what we are effectively measuring. In this autoethnography, we gather insights from 14 internal interviews with researchers who are users or designers of visualization literacy tests. We aim to identify what makes visualization literacy assessment a “wicked” problem. We further reflect on the fluidity of visualization literacy and discuss how this property may lead to misalignment between what the construct is and how measurements of it are used or designed. We also examine potential threats to measurement validity from conceptual, operational, and methodological perspectives. Based on our experiences and reflections, we propose several calls to action aimed at tackling the wicked problem of visualization literacy measurement, such as broadening test scopes and modalities, improving the ecological validity of tests, making tests easier to use, seeking interdisciplinary collaboration, and drawing on continued dialogue about visualization literacy to anticipate and become more comfortable with its fluidity.

    Turnpike property of linear quadratic control problems with unbounded control operators

    No full text
    We establish the turnpike property for linear quadratic control problems whose control operator is admissible and may be unbounded, under quite general and natural assumptions. The turnpike property has been well studied for bounded control operators, based on the theory of differential and algebraic Riccati equations. For unbounded control operators, there are only a few results, limited to some special cases of one-dimensional hyperbolic systems or to analytic semigroups. Our analysis is inspired by the pioneering work of Porretta and Zuazua [PZ13]. We start by approximating the admissible control operator with a sequence of bounded ones. We then prove the convergence of the approximate problems to the initial one in a suitable sense. Establishing this convergence is the core of the paper. It requires revisiting, in some sense, the linear quadratic optimal control theory with admissible control operators, investigating the roles of the energy and adjoint states and the connection between infinite-horizon and finite-horizon optimal control problems with an appropriate final cost.
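
    For orientation, a hedged sketch of the standard setting in my own notation (bounded control operator $B$; the paper relaxes this to admissible, possibly unbounded $B$): minimize

        \[
          J_T(u) = \int_0^T \big( \|C x(t) - z\|^2 + \|u(t)\|^2 \big)\, dt
          \quad \text{subject to} \quad \dot x(t) = A x(t) + B u(t), \quad x(0) = x_0 .
        \]

    The turnpike property states that the optimal pair stays exponentially close to the static optimum except near the endpoints:

        \[
          \|x(t) - \bar x\| + \|u(t) - \bar u\| \le K \big( e^{-\mu t} + e^{-\mu (T - t)} \big),
          \qquad t \in [0, T],
        \]

    where $(\bar x, \bar u)$ solves the associated steady-state optimization problem and the constants $K, \mu > 0$ come, in the bounded case, from the algebraic Riccati equation.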

    Intermediation Platforms and Geopolitical Asymmetries, Lessons from a Pandemic

    No full text
    To a large extent, and particularly before vaccines became available, human societies owed their resilience during the COVID-19 pandemic to non-pharmaceutical interventions, i.e. social distancing combined with more invasive digital systems. In this paper, we consider the digital applications developed during 2020-2021, the first two years of the pandemic. We introduce a typology based on the services offered and the data flows they require between public and private actors. A detailed timeline of these developments shows that countries’ strategies evolved in strong coherence with their overall digital policies. Our study demonstrates that this exogenous crisis has reinforced the critical role of intermediation platforms in maintaining society’s essential functions. Their increased centrality has intensified information asymmetries and power imbalances, already at play before the pandemic, both between platforms and states and between countries, leading to new geopolitical equilibria.

    Harnessing ecological niche modeling of Listeria monocytogenes for biopreservation system engineering

    No full text
    Biopreservation is a microbiome engineering technology based on the use of microorganisms as protective cultures and/or their metabolites, which can be used to mitigate the presence of pathogens in food. This study explores the potential of ecological niche modeling to guide the selection of biopreservation candidates. A luminescent strain of Listeria monocytogenes was used in a multivariate high-throughput competition assay assessing a combination of abiotic factors (glucose, NaCl, and pH, in a factorial design) and biotic variables (various competing microorganisms). The resulting data were analyzed using two parallel methods: k-means clustering and Response Surface Modeling (RSM). Integrating the outputs of these approaches allowed competitors to be grouped based on both inhibition strength and niche modeling characteristics. Competitors were categorized into five groups, distinguished by their inhibition levels against L. monocytogenes and the shape of their response surfaces, with some groups displaying complementary features. Weighted Niche Reduction (WNR) calculations derived from model predictions identified the strain combination Carnobacterium maltaromaticum CP14 and Leuconostoc pseudomesenteroides PTF6 as having enhanced inhibitory properties. This study highlights promising possibilities for the bottom-up engineering of synthetic communities for biopreservation applications.
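
    A minimal sketch of the two parallel analyses (a quadratic response surface per competitor, and k-means on inhibition profiles), assuming a tidy CSV whose column names are all hypothetical; this is not the authors' pipeline:

        import pandas as pd
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures

        # Hypothetical assay table: one row per (competitor, condition), with the
        # measured inhibition of the luminescent L. monocytogenes reporter.
        df = pd.read_csv('competition_assay.csv')  # columns: competitor, glucose, nacl, ph, inhibition

        # Response Surface Modeling: quadratic in the abiotic factors, one fit per competitor.
        surfaces = {}
        for name, sub in df.groupby('competitor'):
            rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
            rsm.fit(sub[['glucose', 'nacl', 'ph']], sub['inhibition'])
            surfaces[name] = rsm

        # k-means on per-competitor inhibition profiles across all tested conditions.
        profiles = df.pivot_table(index='competitor',
                                  columns=['glucose', 'nacl', 'ph'],
                                  values='inhibition')
        labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(profiles.fillna(0.0))
        print(dict(zip(profiles.index, labels)))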

    Unsupervised anomaly detection in brain FDG PET with deep generative models: An experimental analysis of model variability and mitigation strategies

    No full text
    Unsupervised anomaly detection identifies anomalies in unlabeled data, making it useful for neuroimaging analysis and computer-aided diagnosis. Given an individual's scan, we use a generative model to construct a subject-specific image of healthy appearance and compare the two images to spot anomalies. Designing anomaly maps in this way has drawbacks, as the reconstructions are imperfect and some variability is not taken into account. We study the model variability that arises from using different random seeds during training and explore solutions to mitigate the effect of unwanted reconstruction errors and variability. Our experiments on 3D brain FDG PET scans from ADNI suggest that variance between models can be reduced by aggregating their reconstructions into a Z-score-based anomaly map, or by normalizing the anomaly map with a healthy validation set.
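
    A minimal sketch of the aggregation idea described above, assuming reconstructions from several independently seeded models are already available as NumPy arrays (all names are hypothetical):

        import numpy as np

        def zscore_anomaly_map(scan, reconstructions, eps=1e-6):
            """Aggregate reconstructions from models trained with different random
            seeds into one voxel-wise Z-score anomaly map."""
            recs = np.stack(reconstructions)   # shape: (n_models, X, Y, Z)
            mu = recs.mean(axis=0)             # voxel-wise mean pseudo-healthy image
            sigma = recs.std(axis=0)           # voxel-wise inter-model variability
            # Large |Z| marks deviations not explained by model-to-model variance.
            return (scan - mu) / (sigma + eps)

    Dividing by the inter-model standard deviation down-weights voxels where the generative models disagree anyway, which is one way to read the variance-reduction claim in the abstract.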

    Consolidation of virtual machines to reduce energy consumption of data centers by using ballooning, sharing and swapping mechanisms

    No full text
    Data centers have major environmental impacts due to their energy consumption and the manufacturing of their equipment. They emit greenhouse gases and consume energy and resources such as rare earths and water. Efficient computing resource management is therefore a key challenge for Cloud service providers today, as they need to meet growing demand while limiting the oversizing of their infrastructures. Mechanisms derived from virtualization, such as Virtual Machine (VM) consolidation, are used to optimize resource management and infrastructure sizing, but economic and technical constraints can hinder their adoption. They require prior knowledge of the infrastructure and a study of its usage to evaluate their potential, involve complex placement algorithms, and are sometimes difficult to implement in hypervisors. In this paper, we propose ORCA (OuR Consolidation Algorithm), a complete consolidation methodology designed to facilitate the production deployment of such mechanisms. This methodology includes the study of VM usage, the use of prediction models, and a VM placement algorithm that takes advantage of resource oversubscription. The choice of relevant oversubscription ratios is also addressed, with a focus on memory overcommitment through the study of three memory overcommitment mechanisms: ballooning, page sharing, and swapping. Results from a detailed simulation process and from deployment on a production infrastructure are presented. The methodology is tested in simulation on two production infrastructure datasets, with power consumption reductions as high as 29.8% and no consolidation errors. The production deployment, using VMware vSphere and taking fault tolerance requirements into account, reduces energy consumption by 6.12% without causing any performance degradation.
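
    To make the oversubscription idea concrete, here is a deliberately naive first-fit-decreasing placement sketch; it is not the ORCA algorithm, and the dict schema and default ratios are assumptions:

        def place_vms(vms, hosts, cpu_ratio=2.0, mem_ratio=1.5):
            """Toy first-fit-decreasing placement with oversubscription: a host may
            be filled up to ratio * capacity on each resource (hypothetical schema)."""
            placement = {}
            used = {h: {'cpu': 0.0, 'mem': 0.0} for h in hosts}
            # Place the largest VMs first (classic bin-packing heuristic).
            for vm, d in sorted(vms.items(), key=lambda kv: -(kv[1]['cpu'] + kv[1]['mem'])):
                for h, cap in hosts.items():
                    if (used[h]['cpu'] + d['cpu'] <= cpu_ratio * cap['cpu'] and
                            used[h]['mem'] + d['mem'] <= mem_ratio * cap['mem']):
                        used[h]['cpu'] += d['cpu']
                        used[h]['mem'] += d['mem']
                        placement[vm] = h
                        break
            return placement  # VMs absent from the mapping could not be placed

        # Made-up demands and capacities, in arbitrary units.
        hosts = {'h1': {'cpu': 16, 'mem': 64}, 'h2': {'cpu': 16, 'mem': 64}}
        vms = {'vm1': {'cpu': 8, 'mem': 24}, 'vm2': {'cpu': 20, 'mem': 40}}
        print(place_vms(vms, hosts))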

    Comparing Longitudinal Preprocessing Pipelines for Brain Volume Consistency in T1-Weighted MRI Test-Retest Scans

    No full text
    Neurodegenerative diseases require longitudinal assessment to track disease progression, with brain volume change from T1-weighted MRI serving as a key biomarker that demands robust and precise processing methods. Although several longitudinal preprocessing pipelines exist, there is no consensus on which offers the highest reliability. In this study, we evaluate six widely used open-source tools for cross-sectional and longitudinal preprocessing of T1-weighted MRI: FreeSurfer, SAMSEG, ANTs, ANTsPyNet, SPM12, and CAT12. We assess their robustness using test-retest data from the MIRIAD cohort, in which no meaningful anatomical change is expected between repeated scans. Our results show that, overall, longitudinal preprocessing methods demonstrate greater robustness than their cross-sectional counterparts. However, this pattern is not consistent across all tools: some longitudinal implementations do not outperform their cross-sectional versions, and the magnitude of improvement varies by method and brain region. We conclude that while existing longitudinal preprocessing approaches can improve the consistency of brain volume estimation, these benefits are method-dependent.
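
    One simple way to score the robustness described above is a symmetrized percent volume difference between test and retest scans; this metric and the table layout below are assumptions for illustration, not necessarily the paper's protocol:

        import pandas as pd

        def test_retest_error(vol_a, vol_b):
            """Symmetrized absolute percent volume difference between two pandas
            Series; 0% means identical volumes on the repeated scans."""
            return 200.0 * (vol_a - vol_b).abs() / (vol_a + vol_b)

        # Hypothetical table: one row per (subject, tool, region) with both volumes.
        df = pd.read_csv('miriad_volumes.csv')
        df['err_pct'] = test_retest_error(df['vol_test'], df['vol_retest'])
        print(df.groupby(['tool', 'region'])['err_pct'].median())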

    Positivity proofs for linear recurrences through contracted cones

    No full text

    Efficient and Scalable Search for Statistics

    No full text
    Informed public debate needs high-quality data. In this context, high-quality statistical data sources are a valuable category of reference information against which a claim can be checked. To facilitate the work of journalists and other fact-checkers, users’ questions about a specific claim should be answered automatically from statistical tables. This task is complicated by the large number, size, and variety of statistical datasets. We introduce the statistical table discovery problem (STD, in short), which aims, given a natural language question and a set of statistical datasets (multidimensional tables), to find the tables most relevant to the question. We then describe STAR, an algorithm for solving the STD problem. Unlike existing table discovery (TD) solutions aimed at relational tables, STAR is devised specifically for multidimensional ones. Further, STAR treats the space and time dimensions of statistical datasets separately. We experimentally show that these features, together, make STAR outperform state-of-the-art TD systems adapted to the STD problem in terms of scalability, search quality, preprocessing time, and question answering time.
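
    As a point of comparison only, a lexical baseline for the STD problem can be written in a few lines; it ignores STAR's separate handling of space and time dimensions, and the table schema below is a made-up assumption:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        def rank_tables(question, tables):
            """Score each multidimensional table by TF-IDF cosine similarity between
            the question and the table's textual metadata (title + dimension names)."""
            docs = [t['title'] + ' ' + ' '.join(t['dimensions']) for t in tables]
            vec = TfidfVectorizer().fit(docs + [question])
            sims = cosine_similarity(vec.transform([question]), vec.transform(docs))[0]
            return sorted(zip((t['title'] for t in tables), sims), key=lambda p: -p[1])

        tables = [
            {'title': 'Unemployment rate by region and quarter', 'dimensions': ['region', 'quarter']},
            {'title': 'CO2 emissions by country and year', 'dimensions': ['country', 'year']},
        ]
        print(rank_tables('how did unemployment evolve by region?', tables))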
