41 research outputs found

    Automatic Text Simplification of News Articles in the Context of Public Broadcasting

    This report summarizes the work carried out by the authors during the Twelfth Montreal Industrial Problem Solving Workshop, held at Université de Montréal in August 2022. The team tackled a problem submitted by CBC/Radio-Canada on the theme of Automatic Text Simplification (ATS).

    Spatial analysis of the glioblastoma proteome reveals specific molecular signatures and markers of survival

    Molecular heterogeneity is a key feature of glioblastoma that impedes patient stratification and leads to large discrepancies in mean patient survival. Here, we analyze a cohort of 96 glioblastoma patients with survival ranging from a few months to over 4 years. Forty-six tumors are analyzed by mass spectrometry-based spatially-resolved proteomics guided by mass spectrometry imaging. Integration of protein expression and clinical information highlights three molecular groups associated with immune, neurogenesis, and tumorigenesis signatures with high intra-tumoral heterogeneity. Furthermore, a set of proteins originating from reference and alternative ORFs is found to be significantly associated with patient survival. Among these proteins, a 5-protein signature is associated with survival. The expression of these 5 proteins is validated by immunofluorescence in an additional cohort of 50 patients. Overall, our work characterizes distinct molecular regions within glioblastoma tissues based on protein expression, which may help guide glioblastoma prognosis and improve current glioblastoma classification.
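
    The reported link between the 5-protein signature and patient survival is the kind of association a Cox proportional-hazards model expresses. Below is a minimal sketch of such an analysis, assuming the lifelines library; the protein names, expression values, and survival data are hypothetical placeholders, not the study's cohort or markers.

```python
# Hedged sketch: relating a 5-protein signature to patient survival with a
# Cox proportional-hazards model. All names and values are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

# Toy cohort: per-patient protein expression, survival time (months),
# and event indicator (1 = death observed, 0 = censored).
df = pd.DataFrame({
    "protein_1": [2.1, 0.4, 1.8, 0.9, 3.2, 0.7, 2.6, 1.1],
    "protein_2": [0.5, 1.9, 0.8, 2.2, 0.3, 1.7, 0.6, 2.0],
    "protein_3": [1.2, 1.0, 1.5, 0.8, 1.9, 0.9, 1.4, 1.1],
    "protein_4": [0.9, 2.4, 1.1, 2.8, 0.6, 2.1, 0.8, 2.5],
    "protein_5": [3.0, 0.8, 2.5, 1.0, 3.4, 0.9, 2.8, 1.2],
    "survival_months": [6, 38, 9, 44, 4, 30, 8, 51],
    "event": [1, 1, 1, 0, 1, 0, 1, 0],
})

# Small penalizer to regularize the fit on this tiny toy dataset.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="survival_months", event_col="event")
cph.print_summary()  # hazard ratio per protein; HR > 1 suggests worse prognosis
```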

    Systematic literature review of determinants of sedentary behaviour in older adults: a DEDIPAC study

    BACKGROUND: Older adults are the most sedentary segment of society, and high sedentary time is associated with poor health and wellbeing outcomes in this population. Identifying determinants of sedentary behaviour is a necessary step towards developing interventions to reduce sedentary time. METHODS: A systematic literature review was conducted to identify factors associated with sedentary behaviour in older adults. Pubmed, Embase, CINAHL, PsycINFO and Web of Science were searched for articles published between 2000 and May 2014. The search strategy was based on four key elements: (a) sedentary behaviour and its synonyms; (b) determinants and their synonyms (e.g. correlates, factors); (c) types of sedentary behaviour (e.g. TV viewing, sitting, gaming); and (d) types of determinants (e.g. environmental, behavioural). Articles were included in the review if they reported specific information about sedentary behaviour in older adults. Studies on samples identified by disease were excluded. Study quality was rated using QUALSYST. The full review protocol is available from PROSPERO (PROSPERO 2014: CRD42014009823). The analysis was guided by the socio-ecological model framework. RESULTS: Twenty-two original studies were identified out of the 4472 returned by the systematic search. These comprised 19 cross-sectional studies, 2 longitudinal studies and 1 qualitative study, all published after 2011. Half of the studies were European. Study quality was generally high, with a median QUALSYST score of 82 % (IQR 69-96 %). Personal factors were the most frequently investigated, with consistent positive associations for age, and negative associations for retirement, obesity and health status. Only four studies considered environmental determinants, suggesting possible associations with mode of transport, type of housing, cultural opportunities, neighbourhood safety and the availability of places to rest. Only two studies investigated mediating factors. Very limited information was available on the contexts and sub-domains of sedentary behaviours. CONCLUSION: Few studies have investigated determinants of sedentary behaviour in older adults, and those published to date have mostly focussed on personal factors; qualitative studies were largely lacking. More longitudinal studies are needed, as well as the inclusion of a broader range of personal and contextual potential determinants within a systems-based approach, and future studies should be more informed by qualitative work.
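
    For reference, the quality statistic reported above (a median of 82 % with IQR 69-96 %) is a simple percentile computation over per-study QUALSYST percentage scores; a minimal sketch with hypothetical scores:

```python
# Sketch: median and interquartile range of QUALSYST quality scores.
# The scores below are hypothetical, not the review's actual data.
import numpy as np

scores = np.array([96, 82, 69, 88, 74, 91, 64, 85, 79, 93])  # % per study
q1, median, q3 = np.percentile(scores, [25, 50, 75])
print(f"median {median:.0f} % (IQR {q1:.0f}-{q3:.0f} %)")
```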

    The Crisis of Social Categories in the Age of AI

    This article explores the change in calculation methods induced by deep learning techniques. While more traditional statistical methods rely on well-instituted categories to measure the social world, these categories are today denounced as a set of hardened and abstract conventions that are incapable of conveying the complexification of social life and the singularities of individuals. Today, AI models try to overcome some of the criticism raised against rigid social categories through a "spatial and temporal expansion" of the data space, producing a global transformation of calculation methods.

    “The displacement of reality tests: The selection of individuals in the age of machine learning”: Workshop “People Like You: A New Political Arithmetic”, University of Warwick

    Rather than juries, committees, judges, or recruiters, can we use calculation techniques to select the best candidates? Some proponents of these tools consider human decisions to be so burdened by prejudices, false expectations, and confirmation biases that it is much wiser and fairer to trust a calculation (Kleinberg et al., 2019). On the other hand, an abundant literature shows that others are concerned about the risk that implementing decision automation could lead to systemic discrimination (Pasquale, 2015; O’Neil, 2016; Eubanks, 2017; Noble, 2018). While algorithmic calculations are increasingly used in widely different contexts (music recommendations, targeted advertising, information categorization, etc.), this question takes a very specific turn when calculations are introduced into highly particular spaces in our societies: the devices used to select and rank candidates with a view to obtaining a qualification or a rare resource. Following the terminology of Luc Boltanski and Laurent Thévenot (1991), these selection situations constitute a particular form of reality test. Backed by institutions or organizations granting them a certain degree of legitimacy, a disparate set of classification tests has become widespread in our societies with the development of procedures of individualization, auditing, and competitive comparison (Espeland, Sauder, 2016; Power, 1997). Some of these situations are extremely formalized and ritualized, whereas others are hidden in flows of requests and applications addressed to the government, companies, or a variety of other organizations. With the bureaucratization of society (Hibou, 2012), we are filling out an increasing number of forms and files in order to access rights or resources, and then awaiting a decision which, in many social spaces, is now based on a calculation.

    The stability of these selection devices is still uncertain. The way that choices are made, the principles justifying them, the possibility of being granted a privilege, the relevance of the categories selected to organize files, respect for candidate diversity, or monitoring of the equality of applicants are constantly used to fuel criticism that challenges tests and condemns their lack of fairness. In this reflective text, we draw on a document-based survey of the various selection devices employed by the French government or companies to offer a conceptual interpretation of the impact that machine learning techniques have on selection tests. The hypothesis that we wish to explore is that we are witnessing a displacement of the format of classification tests, made possible by a spectacular expansion in the candidate comparison space and the implementation of machine learning techniques. However, this displacement is more than just a consequence of the introduction of technological innovation contributed by big data and artificial intelligence. Its justification in the eyes of the institutions and organizations that order selection tests is based on the claim that this new test format takes into account the multiple criticisms that our societies have constantly raised against previous generations of tests.
    This is why we propose to interpret the interest in and development of these automated procedures as a technocratic response to the development of criticism of the categorical representation of society, which is growing as a result of the individualization and subjectification processes throughout our societies (Cardon, 2019).

    The conceptual framework outlined in this text links four parallel lines of analysis. The first is related to the transformation dynamics of selection tests in our societies. To shed light on it, we propose a primitive formalization of four types of selection test, which we call performance tests, form tests, file tests, and continuous tests. The progressive algorithmization of these tests – with continuous tests constituting the horizon promised by artificial intelligence – is based on an expansion in the space of the data used in decision making which, through successive linkages, involves actors from increasingly diverse and distant spatialities and temporalities. To allay the criticism levelled at previous tests, the expansion of the comparison space allows candidates’ files to take a different form, increasing the number of data points in the hope of better conveying their singularities. The justifications provided to contain criticism increase in generality and are formalized through the implementation of a new test that groups these new entities together in the “technical folds” (Latour, 1999) formed by the computation of machine learning. These folds themselves then become standardized and integrated into a new comparison space. However, new criticism can rapidly emerge, once again leading to a dynamic of expansion of candidates’ comparison space. This dynamic process of test displacement under the effect of criticism is applicable to many types of reality test (Boltanski, Chiapello, 1999), but in this article we pay attention to “selection” tests, during which a device must choose and rank candidates within an initial population.

    Closely related to the first, the second line of analysis in this article concerns the process of spatial-temporal expansion of the data space used to automate decisions (the comparison space). The development of algorithmic calculation and, more generally, artificial intelligence enables a continuous expansion of the devices allowing for the collection of the data necessary for calculation (Crawford, 2021). This process has two components: it is first and foremost spatial, spanning a network of sensors that cling ever more continuously to people’s life paths by means of various affordances; secondly, it is temporal, given that this new type of selection test has the particularity of being supported by the probability of the occurrence of a future event in order to organize the contributions of past data. The displacement of selection tests thus constantly expands the comparison space, not only by increasing the number of variables and diversifying them, but also by reorganizing the temporal structure of the calculation around the optimization of a future objective.

    The third line of analysis looks at the change in calculation methods and, more specifically, the use of deep learning techniques – previously called artificial intelligence – to model an objective based on multidimensional data (Cardon et al., 2018).
    This displacement within statistical methods, that is, the transition from linear models to non-linear machine learning techniques such as deep learning, is the instrument of the test displacement dynamic. By radically shifting calculation towards inductive methods, it transforms the possibility of using the initial variables to make sense of and explain test decisions.

    The fourth and more general line of analysis looks at the way of justifying (through principles) and legitimizing (through institutional stabilization authorities) the principles on which calculations base selection. The dynamic that this text seeks to highlight reveals a displacement in the legitimization method required if tests are to become widely accepted. Whereas traditional selection tests draw on government resources and a compromise between meritocracy and egalitarianism, the new tests that orient classifications use an increasing amount of data that is not certified by institutions; rather, this data is collected by private actors, and is therefore the responsibility of individuals and contingent on their behaviour. Hence, the justification of tests is based less on the preservation of the current state of society (maintaining equality in a population, distributing in accordance with categories recognized by all, rewarding merit) than on an undertaking to continuously transform society, which is one of the features of the competition between individuals that neoliberalism fosters.
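
    As a concrete illustration of this third line of analysis, the sketch below contrasts a linear selection model, whose coefficients can justify a decision in terms of the initial variables, with a non-linear learner trained on the same data, whose decision is no longer a readable combination of those variables. It is a toy example assuming scikit-learn; all candidate features and outcomes are synthetic.

```python
# Sketch of the interpretability gap between a linear selection test and a
# non-linear one. All candidate data is synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # e.g. grades, experience, test score, mobility
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.5, size=500) > 0.5).astype(int)

linear = LogisticRegression().fit(X, y)
# Each coefficient reads as the weight the test gives one instituted variable,
# so a rejection can be explained, and criticized, in those terms.
print("linear coefficients:", linear.coef_.round(2))

deep = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X, y)
# The inductive model may rank candidates more accurately, but its decision
# is no longer expressible as a readable combination of the initial variables.
print("accuracy (linear vs non-linear):", linear.score(X, y), deep.score(X, y))
```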

    The displacement of reality tests. The selection of individuals in the age of machine learning

    This article presents an interpretation of the transformation of selection tests in our societies, such as competitive examinations, recruitment, or competitive access to goods or services, based on the opposition between reality and world proposed by Luc Boltanski in On Critique. We explore the change in the format of these selection tests, arguing that it is made possible by a spectacular enlargement of the space for comparisons between candidates and by the implementation of machine learning techniques. But this shift is not simply a consequence of the introduction of the technological innovation brought by massive data and artificial intelligence. It finds justification in the institutions and organizations that order selection tests because this new test format claims to absorb the multiple criticisms that our societies constantly raise against previous generations of tests. This is why we propose to interpret the interest in and development of these automated procedures as a technocratic response to the development of a critique of the categorical representation of society.

    From reality to world. A critical perspective on AI fairness

    Fairness of Artificial Intelligence (AI) decisions has become a major challenge for governments, companies, and societies. We offer a theoretical contribution that considers AI ethics outside of high-level and top-down approaches, based on Luc Boltanski’s distinction between “reality” and “world”. To do so, we provide a new perspective on the debate on AI fairness and show that criticism of ML unfairness is “realist”, in other words, grounded in an already instituted reality based on demographic categories produced by institutions. Second, we show that the limits of “realist” fairness corrections lead to the elaboration of “radical responses” to fairness, that is, responses that radically change the format of data. Third, we show that fairness correction is shifting to a “domination regime” that absorbs criticism, and we provide some theoretical and practical avenues for further development in AI ethics. Using an ad hoc critical space stabilized by reality tests alongside the algorithm, we build a shared responsibility model that is compatible with the radical response to fairness issues. Finally, this paper shows the fundamental contribution of pragmatic sociology theories, insofar as they afford a social and political perspective on AI ethics by giving material actors, such as database formats, an active role in ethical debates. In a context where data are increasingly numerous, granular, and behavioral, it is essential to renew our conception of the ethics of algorithms in order to establish new models of corporate responsibility that take into account changes in the computing paradigm.
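
    As a minimal illustration of the “realist” register discussed above, the sketch below computes demographic parity difference, a standard group fairness metric defined over an instituted demographic category; the groups and selection outcomes are invented.

```python
# Sketch: demographic parity difference, a "realist" fairness metric that
# presupposes instituted demographic categories. Data is invented.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   0,   1,   0,   1,   0,   0],
})

rates = df.groupby("group")["selected"].mean()  # selection rate per group
print(rates)
print("demographic parity difference:", abs(rates["A"] - rates["B"]))
```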

    A Web cartography of the AI ecosystem in France: Who are the actors of the French AI ecosystem? What relations do they maintain on the Web?

    This cartography, without claiming to be exhaustive, attempts to account for the composition and relational structure of the various AI actors in France on the Web. It reveals a segmentation between several major communities of actors: economic actors (startups, incubators, etc.), artificial intelligence research laboratories and teams, and developer communities that gather around events (meetups) and repositories (GitHub), forming a socio-technical network of varied, interconnected actors (software code, developer pages, project pages, team pages, and company pages).
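
    A segmentation of Web actors into communities of this kind is typically obtained by modularity-based clustering of the hyperlink graph. A minimal sketch, assuming networkx and a toy actor graph (all node names are invented):

```python
# Sketch: community detection on a toy hyperlink graph of AI-ecosystem
# actors via modularity maximization. Nodes and edges are invented.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_edges_from([
    ("startup_a", "incubator_x"), ("startup_b", "incubator_x"),  # economic actors
    ("lab_1", "lab_2"), ("lab_2", "team_nlp"),                   # research labs
    ("meetup_ai", "repo_ml"), ("repo_ml", "dev_page"),           # developer scene
    ("incubator_x", "lab_1"),                                    # cross-community link
])

for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"community {i}: {sorted(community)}")
```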

    How Big Data modifies tools for responsible AI?

    In the digital era, analytics based on artificial intelligence (AI) are increasingly numerous. Their outstanding achievements mark a real revolution in the way data are processed and the types of insights that can be generated (Kersting & Meyer, 2018). At the same time, there is a growing body of multidisciplinary academic research on the societal and ethical issues raised by algorithms, such as discrimination, opacity in decision-making processes, and privacy infringement (Barocas & Selbst, 2016; Selbst et al., 2018). This article provides a framework for analyzing the different visions of integrating ethics into Artificial Intelligence in information systems.