The ethics of forgetting in an age of pervasive computing
In this paper, we examine the potential of pervasive computing to create widespread
sousveillance that will complement surveillance, through the development of lifelogs:
socio-spatial archives that document every action, every event, every
conversation, and every material expression of an individual’s life. Examining lifelog
projects and artistic critiques of sousveillance, we detail the projected mechanics
of life-logging and explore their potential implications. We suggest, given that lifelogs
have the potential to convert externally generated oligopticons into an interior
panopticon, that an ethics of forgetting needs to be developed and built into the
design of life-logging technologies. Rather than seeing forgetting as a
weakness or a fallibility, we argue that it is an emancipatory process that will free
pervasive computing from burdensome and pernicious disciplinary effects.
Chemical Properties from Graph Neural Network-Predicted Electron Densities
According to density functional theory, any chemical property can be inferred
from the electron density, making it the most informative attribute of an
atomic structure. In this work, we demonstrate the use of established physical
methods to obtain important chemical properties from model-predicted electron
densities. We introduce graph neural network architectural choices that provide
physically relevant and useful electron density predictions. Despite not being
trained to predict atomic charges, the model predicts them with an order of
magnitude lower error than a sum of atomic charge densities.
Similarly, the model predicts dipole moments with half the error of the sum of
atomic charge densities method. We demonstrate that larger data sets lead to
more useful predictions in these tasks. These results pave the way for an
alternative path in atomistic machine learning, where data-driven approaches
and existing physical methods are used in tandem to obtain a variety of
chemical properties in an explainable and self-consistent manner.
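As a concrete illustration of how established physical methods turn a predicted density into a property, the dipole moment is a direct real-space integral over the density grid. The sketch below assumes the model returns the density on a regular grid; all function and variable names are illustrative, not the paper's code.

```python
import numpy as np

def dipole_from_density(density, grid_points, voxel_volume,
                        nuclear_charges, nuclear_positions):
    """Classical dipole moment from a gridded electron density (atomic units).

    density           : (N,) density values at the grid points
    grid_points       : (N, 3) Cartesian coordinates of the grid points
    voxel_volume      : volume element associated with each grid point
    nuclear_charges   : (M,) nuclear charges Z_A
    nuclear_positions : (M, 3) nuclear coordinates R_A
    """
    # Electronic term: -∫ r ρ(r) dr, approximated as a Riemann sum over the grid.
    electronic = -(density[:, None] * grid_points).sum(axis=0) * voxel_volume
    # Nuclear term: Σ_A Z_A R_A.
    nuclear = (nuclear_charges[:, None] * nuclear_positions).sum(axis=0)
    return nuclear + electronic

def electron_count(density, voxel_volume):
    """Sanity check: ∫ ρ(r) dr should recover the number of electrons."""
    return density.sum() * voxel_volume
```

Atomic charges would additionally require a density-partitioning scheme (for example Hirshfeld or Bader analysis) applied to the same grid.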
The discomforting rise of 'public geographies': a 'public' conversation.
In this innovative and provocative intervention, the authors explore the burgeoning ‘public turn’ visible across the social sciences to espouse the need to radically challenge and reshape dominant and orthodox visions of ‘the academy’, academic life, and the role and purpose of the academic.
Geographies of the COVID-19 pandemic
The spread of the novel coronavirus (SARS-CoV-2) has resulted in the most devastating global public health crisis in over a century. At present, over 10 million people from around the world have contracted the Coronavirus Disease 2019 (COVID-19), leading to more than 500,000 deaths globally. The global health crisis unleashed by the COVID-19 pandemic has been compounded by political, economic, and social crises that have exacerbated existing inequalities and disproportionately affected the most vulnerable segments of society. The global pandemic has had profoundly geographical consequences, and as the current crisis continues to unfold, there is a pressing need for geographers and other scholars to critically examine its fallout. This introductory article provides an overview of the current special issue on the geographies of the COVID-19 pandemic, which includes 42 commentaries written by contributors from across the globe. Collectively, the contributions in this special issue highlight the diverse theoretical perspectives, methodological approaches, and thematic foci that geographical scholarship can offer to better understand the uneven geographies of the Coronavirus/COVID-19.
From Molecules to Materials: Pre-training Large Generalizable Models for Atomic Property Prediction
Foundation models have been transformational in machine learning fields such
as natural language processing and computer vision. Similar success in atomic
property prediction has been limited due to the challenges of training
effective models across multiple chemical domains. To address this, we
introduce Joint Multi-domain Pre-training (JMP), a supervised pre-training
strategy that simultaneously trains on multiple datasets from different
chemical domains, treating each dataset as a unique pre-training task within a
multi-task framework. Our combined training dataset consists of 120M
systems from OC20, OC22, ANI-1x, and Transition-1x. We evaluate performance and
generalization by fine-tuning on a diverse set of downstream tasks and
datasets, including QM9, rMD17, MatBench, QMOF, SPICE, and MD22. JMP
demonstrates an average improvement of 59% over training from scratch and
matches or sets the state of the art on 34 out of 40 tasks. Our work highlights
the potential of pre-training strategies that use diverse data to advance
property prediction across chemical domains, especially for low-data tasks.
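The multi-task framing described above can be pictured as a shared backbone with one output head per pre-training dataset, optimized jointly. The following is a minimal sketch under that assumption; module, loader, and hyperparameter names are placeholders rather than the released JMP implementation.

```python
import itertools
import torch
import torch.nn as nn

class MultiDomainModel(nn.Module):
    """Shared backbone with one regression head per pre-training dataset."""
    def __init__(self, backbone: nn.Module, feature_dim: int, task_names):
        super().__init__()
        self.backbone = backbone
        self.heads = nn.ModuleDict({name: nn.Linear(feature_dim, 1)
                                    for name in task_names})

    def forward(self, batch, task: str):
        features = self.backbone(batch)          # shared representation
        return self.heads[task](features)        # dataset-specific prediction

def pretrain(model, loaders, steps, lr=1e-4):
    """Round-robin joint pre-training, treating each dataset as its own task."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    iterators = {name: itertools.cycle(loader) for name, loader in loaders.items()}
    loss_fn = nn.L1Loss()
    for _ in range(steps):
        optimizer.zero_grad()
        total_loss = 0.0
        for task, batches in iterators.items():
            inputs, targets = next(batches)
            preds = model(inputs, task).squeeze(-1)
            total_loss = total_loss + loss_fn(preds, targets)
        total_loss.backward()                    # joint gradient across all domains
        optimizer.step()
```

Fine-tuning would then reuse the pre-trained backbone with a fresh head for each downstream dataset such as QM9 or MatBench.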
Orchestrating Tuple-based Languages
The World Wide Web can be thought of as a global computing architecture supporting the deployment of distributed networked applications. Currently, such applications can be programmed by resorting mainly to two distinct paradigms: one devised for orchestrating distributed services, and the other designed for coordinating distributed (possibly mobile) agents. In this paper, the issue of designing a programming language aiming at reconciling orchestration and coordination is investigated. Taking as starting point the orchestration calculus Orc and the tuple-based coordination language Klaim, a new formalism is introduced combining concepts and primitives of the original calculi.
To demonstrate the feasibility and effectiveness of the proposed approach, a prototype implementation of the new formalism is described and it is then used to tackle a case study dealing with a simplified but realistic electronic marketplace, where a number of on-line stores allow client applications to access information about their goods and to place orders.
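To make the coordination side concrete, the tuple-space primitives in the Linda/Klaim tradition can be sketched as follows; this is a didactic Python approximation of out/rd/in with wildcard matching, not the formalism or the prototype described in the paper.

```python
import threading

class TupleSpace:
    """Shared associative store with Linda/Klaim-style coordination primitives."""
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, *tup):
        """Deposit a tuple into the shared space."""
        with self._cond:
            self._tuples.append(tuple(tup))
            self._cond.notify_all()

    def _match(self, template, tup):
        # None acts as a wildcard field in the template.
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def rd(self, *template):
        """Blocking, non-destructive read of a matching tuple."""
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        return tup
                self._cond.wait()

    def in_(self, *template):
        """Blocking, destructive withdrawal of a matching tuple."""
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        self._tuples.remove(tup)
                        return tup
                self._cond.wait()

# Usage in the spirit of the marketplace case study: a store publishes an
# offer, a client reads it and deposits an order into the shared space.
space = TupleSpace()
space.out("price", "book", 12.50)
_, item, price = space.rd("price", "book", None)
space.out("order", item, 1)
```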
Forecasting in the light of Big Data
Predicting the future state of a system has always been a natural motivation
for science and practical applications. Such a topic, beyond its obvious
technical and societal relevance, is also interesting from a conceptual point
of view. This is because forecasting lends itself to two equally radical, yet
opposite, methodologies: a reductionist one, based on first principles, and a
naive inductivist one, based only on data. This latter view
has recently gained some attention in response to the availability of
unprecedented amounts of data and increasingly sophisticated algorithmic
analytic techniques. The purpose of this note is to assess critically the role
of big data in reshaping the key aspects of forecasting and in particular the
claim that bigger data leads to better predictions. Drawing on the
representative example of weather forecasts, we argue that this is not generally
the case. We conclude by suggesting that a clever and context-dependent
compromise between modelling and quantitative analysis stands out as the best
forecasting strategy, as anticipated nearly a century ago by Richardson and von
Neumann.
Roadmaps to Utopia: Tales of the Smart City
Notions of the Smart City are pervasive in urban development discourses. Various frameworks for the development of smart cities, often conceptualized as roadmaps, make a number of implicit claims about how smart city projects proceed, but the legitimacy of those claims is unclear. This paper begins to address this gap in knowledge. We explore the development of a smart transport application, MotionMap, in the context of a £16M smart city programme taking place in Milton Keynes, UK. We examine how the idealized smart city narrative was locally inflected, and discuss the differences between the narrative and the processes and outcomes observed in Milton Keynes. The research shows that the vision of data-driven efficiency outlined in the roadmaps is not universally compelling, and that different approaches to the sensing and optimization of urban flows have potential for empowering or disempowering different actors. Roadmaps tend to emphasize the importance of delivering quick practical results. However, the benefits observed in Milton Keynes did not come from quick technical fixes but from a smart city narrative that reinforced existing city branding, mobilizing a growing network of actors towards the development of a smart region. Further research is needed to investigate this and other smart city developments, the significance of different smart city narratives, and how power relationships are reinforced and constructed through them.