Technology and Testing
From early answer sheets filled in with number 2 pencils, to tests administered by mainframe computers, to assessments wholly constructed by computers, it is clear that technology is changing the field of educational and psychological measurement. The numerous and rapid advances have an immediate impact on test creators, assessment professionals, and those who implement and analyze assessments. This comprehensive new volume brings together leading experts on the issues posed by technological applications in testing, with chapters on game-based assessment, testing with simulations, video assessment, computerized test development, large-scale test delivery, model choice, validity, and error issues. Including an overview of existing literature and ground-breaking research, each chapter considers the technological, practical, and ethical considerations of this rapidly changing area. Ideal for researchers and professionals in testing and assessment, Technology and Testing provides a critical and in-depth look at one of the most pressing topics in educational testing today.
Programming Languages and Systems
This open access book constitutes the proceedings of the 31st European Symposium on Programming, ESOP 2022, which was held during April 5-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 21 regular papers presented in this volume were carefully reviewed and selected from 64 submissions. They deal with fundamental issues in the specification, design, analysis, and implementation of programming languages and systems.
High-Performance Modelling and Simulation for Big Data Applications
This open access book was prepared as a Final Publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to allow better discernment of the domain at hand, their representation becomes increasingly demanding in computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.
Design revolutions: IASDR 2019 Conference Proceedings. Volume 4: Learning, Technology, Thinking
In September 2019 Manchester School of Art at Manchester Metropolitan University was honoured to host the biennial conference of the International Association of Societies of Design Research (IASDR) under the unifying theme of DESIGN REVOLUTIONS. This was the first time the conference had been held in the UK. Through key research themes across nine conference tracks – Change, Learning, Living, Making, People, Technology, Thinking, Value and Voices – the conference opened up a compelling, meaningful and radical dialogue on the role of design in addressing societal and organisational challenges. This Volume 4 includes papers from the Learning, Technology and Thinking tracks of the conference.
How not to return to normal
In a March 2020 article published in Le Monde, Bruno Latour defined the Covid-19 emergency as "the big rehearsal" for the larger disaster to come: one that extends to all forms of life on Earth. The ongoing crisis, in his eyes, becomes both a risk and an opportunity to trial and develop new action plans necessary for the continuation of life. "The pandemic is a portal," wrote author Arundhati Roy a few days later, calling for a more equitable and sustainable post-pandemic future. The pandemic is an opportunity for un-learning and changing direction, particularly in how we approach risk and disaster. The dominant narrative for politicians and the media, however, is one of “returning to normal” as soon as possible, bouncing back, relying on established models of resilience based on the management of economic risk. They are also rehearsing, or modelling, worst- or best-case scenarios.
Artists, designers, and institutions are shaping discourses around the growing exhaustion of our resources, but also performing, visualising, simulating and modelling responses to possible risks and imagining resilience differently. Design and art can foster new visions, pilot new modes of communication and knowledge sharing, and drive the interdisciplinary collaborations necessary to address common issues. This panel explores ways in which art and design practices can be mobilised to transform current approaches to risk and disaster in imaginative, sustainable and equitable ways.
The papers selected for this session reflect a need to reassess, reframe, and reimagine the roles of museums, art and design, and thus contribute to a space for critical reflection to inform action, strategy, and practices. It is important to remember that our fields are far from immune to complicity in the creation and reinforcement of the kinds of inequalities and injustices that have been made even more unmistakably clear in the last year: as Sasha Costanza-Chock, author of the book Design Justice, has pointed out, designers are ‘often unwittingly reproducing the existing structure of [...] who's going to benefit the most and who's going to be harmed the most by the tools or the objects or the systems or the buildings or spaces that we're designing.’ The urge to respond in an emergency, whether it is a design challenge in the context of COVID-19 or an exhibition on climate change, requires space for critical thinking, inclusive conversation and production. This necessity comes across in the three papers brought together for this panel, and in the opening presentation by Emily Candela and Francesca Cavallo.
BDWatchdog: real-time monitoring and profiling of Big Data applications and frameworks
This is a post-peer-review, pre-copyedit version of an article published in Future Generation Computer Systems. The final authenticated version is available online at: https://doi.org/10.1016/j.future.2017.12.068
[Abstract] Current Big Data applications are characterized by a heavy use of system resources (e.g., CPU, disk), generally distributed across a cluster. To effectively improve their performance, there is a critical need for an accurate analysis of both Big Data workloads and frameworks. This means fully understanding how system resources are being used in order to identify potential bottlenecks, from resource bottlenecks to code bottlenecks. This paper presents BDWatchdog, a novel framework that allows real-time and scalable analysis of Big Data applications by combining time series for resource monitoring and flame graphs for code profiling, focusing on the processes that make up the workload rather than the underlying instances on which they are executed. This shift from the traditional system-based monitoring to a process-based analysis is interesting for new paradigms such as software containers or serverless computing, where the focus is put on applications and not on instances. BDWatchdog has been evaluated on a Big Data cloud-based service deployed at the CESGA supercomputing center. The experimental results show that a process-based analysis allows for a more effective visualization and overall improves the understanding of Big Data workloads. BDWatchdog is publicly available at http://bdwatchdog.dec.udc.es.
Ministerio de Economía, Industria y Competitividad; TIN2016-75845-P
Ministerio de Educación; FPU15/0338
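To make the process-based approach concrete, the sketch below collects a per-process resource time series in the spirit of what the abstract describes. It is a hypothetical illustration, not BDWatchdog's code: the use of the psutil library, the "java" name filter (many Big Data frameworks run on the JVM), and the sampling interval are all assumptions.

```python
# Hypothetical sketch of per-process resource sampling (not BDWatchdog code).
import time
import psutil

def sample_processes(name_filter="java", interval=1.0):
    """Yield (timestamp, pid, name, cpu_percent, read_bytes, write_bytes)
    for every matching process, once per sampling interval."""
    while True:
        now = time.time()
        for proc in psutil.process_iter(["pid", "name"]):
            try:
                if name_filter not in (proc.info["name"] or ""):
                    continue
                cpu = proc.cpu_percent(interval=None)  # % CPU since last call
                io = proc.io_counters()                # cumulative disk I/O
                yield (now, proc.info["pid"], proc.info["name"],
                       cpu, io.read_bytes, io.write_bytes)
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                continue                               # process vanished or protected
        time.sleep(interval)
```

Feeding such samples into a time-series store and pairing them with periodic stack profiles (rendered as flame graphs) is one way to realise the combination the abstract describes; the actual framework's design may differ.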
A Scalable Parallel Algorithm for the Simulation of Structural Plasticity in the Brain
The neural network in the brain is not hard-wired. Even in the mature brain, new connections between neurons are formed and existing ones are deleted, which is called structural plasticity. The dynamics of the connectome is key to understanding how learning, memory, and healing after lesions such as stroke work. However, with current experimental techniques, even the creation of an exact static connectivity map, which is required for various brain simulations, is very difficult.
One alternative is to use simulations based on network models to predict the evolution of synapses between neurons from their specified activity targets. This is particularly useful because experimental measurements of the spiking frequency of neurons are more easily accessible and reliable than biological connectivity data. The Model of Structural Plasticity (MSP) by Butz and van Ooyen is an example of this approach. In traditional models, connectivity between neurons is fixed, while plasticity merely arises from changes in the strength of existing synapses, typically modeled as weight factors. MSP, in contrast, models a synapse as a connection between an "axonal" plug and a "dendritic" socket. These synaptic elements grow and shrink independently on each neuron. When an axonal element of one neuron connects to the dendritic element of another neuron, a new synapse is formed. Conversely, when a synaptic element bound in a synapse retracts, the corresponding synapse is removed. The governing idea of the model is that plasticity in cortical networks is driven by the need of individual neurons to homeostatically maintain their average electrical activity.
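As a rough illustration of this homeostatic rule, the toy sketch below grows or retracts a neuron's synaptic elements depending on how far its activity is from a set point. The Neuron fields, the set point, the rate, and the linear update are illustrative assumptions, not the actual growth equations of MSP or of the dissertation.

```python
# Toy homeostatic growth rule in the spirit of MSP; constants and the linear
# update are illustrative assumptions, not the model's actual equations.
from dataclasses import dataclass

TARGET_ACTIVITY = 0.7   # homeostatic set point for average activity (assumed)
GROWTH_RATE = 1e-3      # elements grown/retracted per update step (assumed)

@dataclass
class Neuron:
    activity: float                   # current average electrical activity
    axonal_elements: float = 0.0      # vacant "plugs" offered to other neurons
    dendritic_elements: float = 0.0   # vacant "sockets" offered to other neurons

def update_elements(n: Neuron) -> None:
    """Grow elements when activity is below the target, retract when above.

    Newly grown vacant elements can later pair up to form synapses; a bound
    element that retracts removes the synapse it belongs to.
    """
    deviation = TARGET_ACTIVITY - n.activity
    n.axonal_elements = max(0.0, n.axonal_elements + GROWTH_RATE * deviation)
    n.dendritic_elements = max(0.0, n.dendritic_elements + GROWTH_RATE * deviation)
```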
However, to predict which neurons connect to each other, the current MSP model computes probabilities for all pairs of neurons, resulting in O(n^2) complexity. For large-scale simulations with millions of neurons and beyond, this quadratic term is prohibitive. Inspired by hierarchical methods for solving n-body problems in particle physics, this dissertation presents a scalable approximation algorithm for simulating structural plasticity based on MSP.
To scale MSP to millions of neurons, we adapt the Barnes-Hut algorithm, as used in gravitational particle simulations, into a scalable solution for simulating structural plasticity in the brain with a time complexity of O(n log^2 n) instead of O(n^2). We then show through experimental validation that the approximation underlying the algorithm does not adversely affect the quality of the results. For this purpose, we use graph metrics to compare neural networks created by the original MSP with those created by our approximation of it.
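The sketch below shows, under stated assumptions, how a Barnes-Hut-style traversal avoids the all-pairs computation: vacant dendritic elements in a distant octree cell are treated as a single aggregate source whenever the cell is small relative to its distance. The Cell layout, the Gaussian kernel, and the SIGMA/THETA constants are hypothetical simplifications, not the dissertation's data structures or parameters.

```python
# Barnes-Hut-style approximation of the neuron pairing step: a hypothetical
# sketch, not the dissertation's implementation.
import math

SIGMA = 750.0   # width of the distance kernel, in micrometres (assumed)
THETA = 0.5     # opening criterion, as in gravitational Barnes-Hut (assumed)

def kernel(dist: float) -> float:
    """Gaussian attraction between an axonal and a dendritic element."""
    return math.exp(-(dist / SIGMA) ** 2)

class Cell:
    """Octree cell summarising the vacant dendritic elements below it."""
    def __init__(self, center, size):
        self.center = center        # (x, y, z) centre of the cell
        self.size = size            # edge length of the cell
        self.children = []          # sub-cells; empty for a leaf
        self.free_dendrites = 0     # vacant dendritic elements in this subtree
        self.centroid = center      # weighted position of those elements

def attraction(cell: Cell, pos) -> float:
    """Approximate total attraction from all vacant dendritic elements in
    `cell` to an axonal element at `pos`.

    A cell that is far away relative to its size (size / distance < THETA)
    is not resolved further: all of its elements are lumped at its centroid,
    which is what removes the need to visit every neuron pair.
    """
    if cell.free_dendrites == 0:
        return 0.0
    d = math.dist(pos, cell.centroid)
    if not cell.children or (d > 0 and cell.size / d < THETA):
        return cell.free_dendrites * kernel(d)
    return sum(attraction(child, pos) for child in cell.children)
```

A pairing step built on this could then pick a synapse partner by descending the tree, choosing children with probability proportional to their attraction; this is only one plausible design, not necessarily the one used in the thesis.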
Finally, we demonstrate that our scalable approximation algorithm can simulate the dynamics of the connectome with 10^9 neurons, four orders of magnitude more than the naive O(n^2) version and two orders of magnitude fewer than the human brain. We present a scalable MPI-based implementation of the algorithm, and our performance extrapolations predict that, given sufficient compute resources, a full-scale simulation of the human brain with 10^11 neurons is possible in principle even with today's technology.
Until now, the largest structural plasticity simulations of MSP corresponded, in terms of the number of neurons, to the scale of a fruit fly. Our approximation algorithm goes a significant step further, reaching a scale similar to that of a galago primate. Additionally, large-scale brain connectivity maps can now be grown from scratch, and their evolution after destructive events such as stroke can be simulated.