27 research outputs found

    I care by...

    The Care research group at the Royal College of Art (RCA) was conceived in the last week of June 2020, a month after the killing of George Floyd by police in Minnesota, an act which catalysed global protests on systemic racism and police brutality. In the UK, tens of thousands of protesters took to the streets to show solidarity with demonstrators in the US. Coinciding with the easing of the lockdown restrictions imposed to manage the coronavirus, the marches shone a light on the government’s failure to protect Black, Asian and Minority Ethnic people from the disproportionate risk posed by COVID-19, and on the police’s increased use of stop and search in areas with large BAME populations. The pandemic has shone the harshest of lights on the question of care in the age of neoliberalism: who gets it; who needs it; who does it; who controls it. The Care research group, comprising staff and postgraduate researchers within the School of Arts and Humanities at the RCA, works in this light. Over the course of a year, as the inequalities of the virus were becoming all too clear, the group regularly came together via Zoom to reflect on: the question of how to care for the human body in the technical-patriarchal societies the virus has re-inscribed; the ‘un-doing’ of what Judith Butler describes as the binary of vulnerability and resistance; the politically-transformative potential of prioritising care (rooted in empathy, solidarity, kinship) over capitalist gain; the activation of creative research practices (including but by no means limited to writing, looking, painting, drawing, filming, performing, collecting, assembling, curating, making public) as means of caring/transforming.
The group’s activities through the year of trying, failing, and trying again to care for its work and members are gathered in a co-authored Declaration of Care, published here, and expanded upon with attention to some of the methods group members developed in their research through practice. The Declaration was recited in a participatory performance with invited artist Jade Montserrat on 10 March 2021. Over the course of a two-hour webinar, participants including members of the public were invited to draw alongside Montserrat with whatever materials they had to hand as they listened to texts on the vulnerabilities of bodies, the structuring of care within institutions, and the tactile, sensory, healing qualities of creative practice. This book includes a selection of the participants’ drawings, a Reader comprising the texts that were shared, and Montserrat’s drawings created through the performance. Ahead of the performance, Montserrat delivered an address to the Care research group which looked back on a lifetime of calling for a kind of care that was never provided. Excerpts from Montserrat’s address are included here too, alongside a text and image which reflect on the group’s affective reactions to the experience of listening to it, titled Episode. The Declaration is a list of methods (approaches, processes, techniques), an enumeration of how Care research group members have worked, and would like to work: ‘I care by…’. This is a statement which has reverberated throughout the year, which bears repeating, which resounds still. Gemma Blackshaw, Care research group convenor, 2020–202

    Search for dark matter produced in association with bottom or top quarks in √s = 13 TeV pp collisions with the ATLAS detector

    A search for weakly interacting massive particle dark matter produced in association with bottom or top quarks is presented. Final states containing third-generation quarks and missing transverse momentum are considered. The analysis uses 36.1 fb⁻¹ of proton–proton collision data recorded by the ATLAS experiment at √s = 13 TeV in 2015 and 2016. No significant excess of events above the estimated backgrounds is observed. The results are interpreted in the framework of simplified models of spin-0 dark-matter mediators. For colour-neutral spin-0 mediators produced in association with top quarks and decaying into a pair of dark-matter particles, mediator masses below 50 GeV are excluded assuming a dark-matter candidate mass of 1 GeV and unitary couplings. For scalar and pseudoscalar mediators produced in association with bottom quarks, the search sets limits on the production cross-section of 300 times the predicted rate for mediators with masses between 10 and 50 GeV, assuming a dark-matter mass of 1 GeV and unitary coupling. Constraints on colour-charged scalar simplified models are also presented. Assuming a dark-matter particle mass of 35 GeV, mediator particles with mass below 1.1 TeV are excluded for couplings yielding a dark-matter relic density consistent with measurements.

    Reading tea leaves worldwide: decoupled drivers of initial litter decomposition mass‐loss rate and stabilization

    The breakdown of plant material fuels soil functioning and biodiversity. Currently, process understanding of global decomposition patterns and the drivers of such patterns is hampered by the lack of coherent large-scale datasets. We buried 36,000 individual litterbags (tea bags) worldwide and, using the Tea Bag Index (TBI), found an overall negative correlation between initial mass-loss rates and stabilization factors of plant-derived carbon. The stabilization factor quantifies the degree to which easy-to-degrade components accumulate during early-stage decomposition (e.g. through environmental limitations). However, agriculture and an interaction between moisture and temperature led to a decoupling between initial mass-loss rates and stabilization, notably in colder locations. Using the TBI improved mass-loss estimates of natural litter compared to models that ignored stabilization. Ignoring the transformation of dead plant material into more recalcitrant substances during early-stage decomposition, and the environmental control of this transformation, could lead carbon-cycle models to overestimate carbon losses during early decomposition.
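    The TBI derives its two parameters from paired litterbags of fast-decomposing green tea and slow-decomposing rooibos tea. A minimal sketch of the published TBI formulation follows; the constants are the standard hydrolysable fractions of the two teas, while the function names and example numbers are illustrative, not from this study:

```python
import math

# Hydrolysable (chemically decomposable) fractions of the two tea types,
# as published for the Tea Bag Index.
H_GREEN = 0.842
H_ROOIBOS = 0.552

def stabilization_factor(green_fraction_remaining):
    """S = 1 - a_g / H_g, where a_g is the decomposed fraction of green tea."""
    a_g = 1.0 - green_fraction_remaining
    return 1.0 - a_g / H_GREEN

def decomposition_rate(rooibos_fraction_remaining, s, days):
    """Solve W(t) = a_r * exp(-k * t) + (1 - a_r) for the initial rate k,
    with a_r = H_r * (1 - S), the labile fraction of rooibos tea."""
    a_r = H_ROOIBOS * (1.0 - s)
    recalcitrant = 1.0 - a_r
    return -math.log((rooibos_fraction_remaining - recalcitrant) / a_r) / days

# Illustrative burial: after 90 days, 40% of the green tea mass and
# 75% of the rooibos mass remain.
s = stabilization_factor(0.40)   # higher S = more stabilization
k = decomposition_rate(0.75, s, 90)
```

    The decoupling reported above corresponds to k and S varying independently across sites, rather than high k implying low S.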

    UbiWare: web-based dynamic data & service management platform for AmI (Poster)

    The surrounding space is constantly augmented by a myriad of devices that expose heterogeneous data, like slower-changing data, dynamic data streams and functionalities. Developing applications that cope with heterogeneous data and diverse communication protocols is a tedious task. The success of such applications depends on the performance of data access and on the easy management of available data. To address these challenges, we propose UbiWare, a middleware that facilitates application development for ambient intelligence. We abstract the surrounding space as a database-like environment and the heterogeneous entities and devices as data services that produce data. To query the distributed data services and access their data, we introduce an API that greatly simplifies application development and is compatible with the different operators used by query engines in Data Stream Management Systems or Pervasive Environment Management Systems.
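    UbiWare's actual API is not reproduced in this abstract; as an illustration of the data-service abstraction it describes, the hypothetical sketch below wraps a device as a uniform service exposing slow-changing attributes, a stream of dynamic readings, and callable functionality (all names are invented for illustration):

```python
from typing import Callable, Dict, Iterator, List

class DataService:
    """Hypothetical wrapper presenting a device as a queryable data service."""

    def __init__(self, name: str, attributes: Dict[str, str],
                 stream: Iterator[float], methods: Dict[str, Callable]):
        self.name = name
        self.attributes = attributes   # slow-changing data (e.g. location)
        self._stream = stream          # dynamic data stream (e.g. readings)
        self._methods = methods        # exposed functionality

    def select(self, n: int, predicate=lambda v: True) -> List[float]:
        """Stream operator: take the next n readings matching a predicate."""
        out = []
        for value in self._stream:
            if predicate(value):
                out.append(value)
                if len(out) == n:
                    break
        return out

    def invoke(self, method: str, *args):
        """Call a functionality the device exposes."""
        return self._methods[method](*args)

# A simulated temperature sensor registered as a data service.
sensor = DataService(
    "thermo-1",
    {"location": "room 12"},
    iter([18.5, 19.0, 22.4, 23.1, 18.9]),
    {"unit": lambda: "celsius"},
)
warm = sensor.select(2, predicate=lambda t: t > 20)
```

    The point of the abstraction is that an application queries `sensor` the same way regardless of the device's native protocol.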

    P-Bench: benchmarking in data-centric pervasive application development

    Developing complex data-centric applications, which manage intricate interactions between distributed and heterogeneous entities from pervasive environments, is a tedious task. In this paper we pursue the difficult objective of assessing the “easiness” of data-centric development in pervasive environments, which turns out to be much more challenging than simply measuring execution times in performance analyses and requires highly qualified programmers. We introduce P-Bench, a benchmark that comparatively evaluates the easiness of development using three types of systems: (1) the Microsoft StreamInsight unmodified Data Stream Management System, LINQ and C#, (2) the StreamInsight++ ad hoc framework, an enriched version of StreamInsight that meets pervasive application requirements, and (3) our SoCQ system, designed for managing data, streams and services in a unified manner. We define five tasks that we implement in the analysed systems, based on core needs for pervasive application development. To evaluate the tasks’ implementations, we introduce a set of metrics and provide the experimental results. Our study allows differentiating between the proposed types of systems based on their strengths and weaknesses when building pervasive applications.

    Data Exploration with SQL using Machine Learning Techniques

    Nowadays data scientists have access to gigantic data, much of it accessible through SQL. Despite the inherent simplicity of SQL, writing relevant and efficient SQL queries is known to be difficult, especially for databases having a large number of attributes or meaningless attribute names. In this paper, we propose a “rewriting” technique to help data scientists formulate SQL queries, to rapidly and intuitively explore their big data, while keeping user input at a minimum, with no manual tuple specification or labeling. For a user-specified query, we define a negation query, which produces tuples that are not wanted in the initial query’s answer. Since there is an exponential number of such negation queries, we describe a pseudo-polynomial heuristic to pick the negation closest in size to the initial query, and construct a balanced learning set whose positive examples correspond to the results desired by analysts, and negative examples to those they do not want. The initial query is reformulated using machine learning techniques and a new query, more efficient and diverse, is obtained. We have implemented a prototype and conducted experiments on real-life datasets and synthetic query workloads to assess the scalability and precision of our proposition. A preliminary qualitative experiment conducted with astrophysicists is also described.
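    As a toy illustration of the idea, not the paper's actual algorithm, the sketch below builds a balanced learning set from a query's answer and a same-sized "negation" of it, then learns a single-attribute threshold separating the two (a one-node stand-in for the machine-learning step), yielding a rewritten selection predicate; table, attribute and function names are invented:

```python
# Toy table: rows are dicts; a "query" is a Python predicate over a row.
rows = [{"mag": m, "dist": d} for m, d in
        [(3, 10), (4, 12), (5, 30), (6, 35), (7, 50), (8, 55)]]

initial = lambda r: r["mag"] <= 4                 # what the analyst asked for
positives = [r for r in rows if initial(r)]
# Negation query: tuples NOT wanted in the answer, trimmed to the
# positives' size so the learning set stays balanced.
negatives = [r for r in rows if not initial(r)][:len(positives)]

def best_threshold(pos, neg, attr):
    """Stand-in learner: the split on `attr` that best separates the
    positive from the negative examples (a one-node decision tree)."""
    best_correct, best_t = 0, None
    for t in sorted({r[attr] for r in pos + neg}):
        correct = (sum(r[attr] <= t for r in pos)
                   + sum(r[attr] > t for r in neg))
        if correct > best_correct:
            best_correct, best_t = correct, t
    return best_t

# Reformulate the query on an attribute the analyst did not use:
t = best_threshold(positives, negatives, "dist")
rewritten_sql = f"SELECT * FROM stars WHERE dist <= {t}"
```

    The learned predicate selects the same tuples as the initial query here, but expressed over a different, possibly more selective or more meaningful attribute.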

    UPnQ: an Architecture for Personal Information Exploration

    Today our lives are being mapped to the binary realm provided by computing devices and their interconnections. The constant increase in both the amount and diversity of personal information organized in digital files has already turned into an information overload. User files contain an ever-growing quantity of potential information that can be extracted at a non-negligible processing cost. Managing user files in this context becomes cumbersome. In this paper we pursue the difficult objective of providing easy and efficient personal information management in a file-oriented context. To this end, we propose the Universal Plug'n'Query (UPnQ) principled approach for Personal Information Management. UPnQ is based on a virtual database that offers query facilities over potential information from files while tuning resource usage. Our goal is to declaratively query the contents of dynamically discovered files at a fine-grained level. We present a prototype that proves the feasibility of our approach and we conduct a simulation study that explores different caching strategies.
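    The lazy, cost-aware extraction UPnQ describes can be pictured with a hypothetical sketch: file contents are only parsed when a query first touches them, and the extracted information is cached so repeated queries avoid the processing cost (class and field names are invented for illustration, not UPnQ's API):

```python
class LazyFileCatalog:
    """Hypothetical virtual-database layer over files: extraction of
    queryable information is deferred until a query needs it, then cached."""

    def __init__(self, files):
        self._files = files          # path -> raw content
        self._cache = {}             # path -> extracted information
        self.extractions = 0         # counts the costly extraction steps

    def _extract(self, path):
        if path not in self._cache:
            self.extractions += 1    # the non-negligible processing cost
            self._cache[path] = {"words": len(self._files[path].split())}
        return self._cache[path]

    def query(self, predicate):
        """Fine-grained declarative query over the discovered files."""
        return [p for p in self._files if predicate(self._extract(p))]

catalog = LazyFileCatalog({"a.txt": "one two three", "b.txt": "hello"})
long_files = catalog.query(lambda info: info["words"] > 2)
catalog.query(lambda info: info["words"] > 2)   # second run hits the cache
```

    The caching strategies the simulation study compares would plug in where `_cache` decides what to keep and for how long.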

    Discriminating queries for data exploration (Requêtes discriminantes pour l'exploration des données)

    In the Big Data era, user profiles are becoming increasingly diverse and data increasingly complex, often making data exploration very difficult. In this article, we propose a query-rewriting technique to help analysts formulate their queries, so as to explore data quickly and intuitively. We introduce discriminating queries, a syntactic restriction of SQL with a selection condition that separates positive and negative examples. We build a learning dataset whose positive examples correspond to the results desired by the analyst, and whose negative examples correspond to those they do not want. Using machine-learning techniques, the initial query is reformulated into a new query, which starts an iterative data-exploration process. We implemented this idea in a prototype (iSQL) and conducted experiments in the field of astrophysics.