Democratising migration governance: temporary labour migration and the responsibility to represent
Defence date: 20 January 2020. Examining Board: Professor Rainer Bauböck, European University Institute (Supervisor); Professor Richard Bellamy, European University Institute; Professor Iseult Honohan, University College Dublin; Professor Valeria Ottonelli, Università degli Studi di Genova.
This thesis explores the possibility of democratic citizenship for temporary migrants. The main problem I investigate is the persistent and systemic vulnerability of temporary migrants to domination. I argue that temporary migrants’ vulnerability to domination stems primarily from the fact that responsibilities towards them, and their political membership, are divided between their country of residence and their country of origin. While their lives are conditioned by both countries, they are democratically isolated from both. Are they merely partial citizens detached from any democratic politics? If not, what responsibility should each country bear towards temporary migrants within and beyond its jurisdiction? Should our commitments to democracy lead us to endorse a radical conception of migrant citizenship through which migrants represent their interests and perspectives between their countries of residence and origin? This thesis addresses these normative issues surrounding temporary labour migration. It develops a democratic theory applicable to this phenomenon, explores the moral and political basis of migrants’ freedom, and explains how the current arrangements might be changed to produce a more democratically just outcome. Its main contribution lies in establishing a new account of democratic citizenship and responsibility that coherently accommodates the political agency of temporary migrants. In particular, the thesis introduces a new normative concept and political agenda: the Responsibility to Represent (R2R).
Under a system of R2R, both sending and receiving countries bear a shared obligation to stage migrants’ contestatory voices in their public policy-making processes, so as to create a society where everyone is free from domination. In summary, I argue that temporary migration programmes are just and legitimate if and only if both sending and receiving states (1) recognise temporary migrants as bearers of a distinct life plan deserving equal treatment and non-domination, (2) provide them with the necessary protections and sufficient resources to carry out their plans while accommodating possible changes to them, and (3) institutionalise contestatory channels through which they can (de)legitimise the current structure of responsibility between the two states.
Out of kernel tuning and optimizations for portable large-scale docking experiments on GPUs
Virtual screening is an early stage of the drug discovery process that selects the most promising candidates. In an urgent computing scenario, finding a solution in the shortest time frame is critical. Any improvement in the performance of a virtual screening application translates into an increase in the number of candidates evaluated, thereby raising the probability of finding a drug. In this paper, we show how we can improve application throughput using out-of-kernel optimizations. These use input features, kernel requirements, and architectural features to rearrange the kernel inputs, executing them out of order, to improve computation efficiency. These optimizations are implemented in an extreme-scale virtual screening application, named LiGen, which can hinge on CUDA and SYCL kernels to carry out the computation on modern supercomputer nodes. Although tailored to a single application, they may also be of interest for applications that share a similar design pattern. The experimental results show that these optimizations can increase kernel performance by up to 2.2X in CUDA and up to 1.9X in SYCL. Moreover, the reported speedup can be achieved with the best proposed parameterization, as shown by the data we collected and report in this manuscript.
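The input-reordering idea behind these out-of-kernel optimizations can be sketched as follows. This is a minimal illustration of the general technique (batching similarly sized inputs to reduce padding waste and divergence on the GPU); the function and field names are hypothetical, not LiGen’s actual API:

```python
# Sketch: reorder docking inputs so each batch contains similarly sized
# ligands, which tends to reduce padding waste and divergence when the
# batch is dispatched to a GPU kernel. Illustrative names only.

def make_batches(ligands, batch_size):
    """Group ligands of similar atom count into batches, out of input order."""
    ordered = sorted(ligands, key=lambda lig: lig["num_atoms"])
    return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]

ligands = [{"id": i, "num_atoms": n} for i, n in enumerate([90, 12, 55, 13])]
for batch in make_batches(ligands, 2):
    print([lig["num_atoms"] for lig in batch])  # each batch holds similar sizes
```

The inputs are still all processed; only their execution order changes, which is what makes the optimization transparent to the rest of the application.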
EPSILOD: efficient parallel skeleton for generic iterative stencil computations in distributed GPUs
Iterative stencil computations are widely used in numerical simulations. They present a high degree of parallelism, high locality, and mostly-coalesced memory access patterns. Therefore, GPUs are good candidates to speed up their computation. However, the development of stencil programs that can work with huge grids in distributed systems with multiple GPUs is not straightforward, since it requires solving problems related to the partition of the grid across nodes and devices, and the synchronization and data movement across remote GPUs. In this work, we present EPSILOD, a high-productivity parallel programming skeleton for iterative stencil computations on distributed multi-GPUs, of the same or different vendors, that supports any type of n-dimensional geometric stencil of any order. It uses an abstract specification of the stencil pattern (neighbors and weights) to internally derive the data partition, synchronizations, and communications. Computation is split to better overlap with communications. This paper describes the underlying architecture of EPSILOD and its main components, and presents an experimental evaluation to show the benefits of our approach, including a comparison with another state-of-the-art solution. The experimental results show that EPSILOD is faster and shows good strong and weak scalability on platforms with both homogeneous and heterogeneous types of GPU.
Funding: Junta de Castilla y León, Ministerio de Economía, Industria y Competitividad, and Fondo Europeo de Desarrollo Regional (FEDER): Proyecto PCAS (TIN2017-88614-R) and Proyecto PROPHET-2 (VA226P20). Ministerio de Ciencia e Innovación, Agencia Estatal de Investigación, and “European Union NextGenerationEU/PRTR” (MCIN/AEI/10.13039/501100011033), grant TED2021-130367B-I00. CTE-POWER and Minotauro, and the technical support provided by Barcelona Supercomputing Center (RES-IM-2021-2-0005, RES-IM-2021-3-0024, RES-IM-2022-1-0014). Open-access publication funded by the Consorcio de Bibliotecas Universitarias de Castilla y León (BUCLE), under Operational Programme 2014ES16RFOP009 FEDER 2014-2020 of Castilla y León, action 20007-CL - Apoyo Consorcio BUCLE.
Software Design Change Artifacts Generation through Software Architectural Change Detection and Categorisation
Unlike other engineering projects, which are mostly implemented by non-expert workers after being designed by engineers, software is designed, implemented, tested, and inspected entirely by experts. Researchers and practitioners have linked software bugs, security holes, problematic integration of changes, complex-to-understand codebases, unwarranted mental pressure, and other problems in software development and maintenance to inconsistent and complex design, and to a lack of ways to easily understand what is going on and what to plan in a software system. The unavailability of proper information and insights needed by development teams to make good decisions makes these challenges worse. Therefore, extracting software design documents and other insightful information is essential to reduce the above-mentioned anomalies. Moreover, architectural design artifact extraction is required to create developer profiles to be available to the market for many crucial scenarios. To that end, architectural change detection, categorization, and change description generation are crucial because they are the primary artifacts used to trace other software artifacts.
However, it is not feasible for humans to analyze all the changes in a single release to detect change and impact, because doing so is time-consuming, laborious, costly, and inconsistent. In this thesis, we conduct six studies addressing these challenges to automate architectural change information extraction and document generation, which could assist development and maintenance teams. In particular, (1) we detect architectural changes using lightweight techniques leveraging textual and codebase properties, (2) categorize them considering intelligent perspectives, and (3) generate design change documents by exploiting precise contexts of components’ relations and change purposes, which were previously unexplored. Our experiments using 4000+ architectural change samples and 200+ design change documents suggest that our proposed approaches are promising in accuracy and scalable enough to deploy frequently. Our proposed change detection approach can detect up to 100% of the architectural change instances (and is very scalable). On the other hand, our proposed change classifier’s F1 score is 70%, which is promising given the challenges. Finally, our proposed system can produce descriptive design change artifacts with 75% significance. Since most of our studies are foundational, our approaches and prepared datasets can be used as baselines for advancing research in design change information extraction and documentation.
A systematic literature review on source code similarity measurement and clone detection: techniques, applications, and challenges
Measuring and evaluating source code similarity is a fundamental software engineering activity that embraces a broad range of applications, including but not limited to code recommendation, duplicate code detection, plagiarism detection, malware detection, and smell detection. This paper proposes a systematic literature review and meta-analysis on code similarity measurement and evaluation techniques to shed light on the existing approaches and their characteristics in different applications. We initially found over 10,000 articles by querying four digital libraries and ended up with 136 primary studies in the field. The studies were classified according to their methodology, programming languages, datasets, tools, and applications. A deep investigation reveals 80 software tools, working with eight different techniques on five application domains. Nearly 49% of the tools work on Java programs and 37% support C and C++, while there is no support for many programming languages. A noteworthy point was the existence of 12 datasets related to source code similarity measurement and duplicate codes, of which only eight were publicly accessible. The lack of reliable datasets, empirical evaluations, hybrid methods, and focus on multi-paradigm languages are the main challenges in the field. Emerging applications of code similarity measurement concentrate on the development phase in addition to the maintenance phase.
Towards A Practical High-Assurance Systems Programming Language
Writing correct and performant low-level systems code is a notoriously demanding job, even for experienced developers. To make matters worse, formally reasoning about its correctness properties introduces yet another level of complexity to the task. It requires considerable expertise in both systems programming and formal verification. The development can be extremely costly due to the sheer complexity of the systems and the nuances in them, if not assisted by appropriate tools that provide abstraction and automation.
Cogent is designed to alleviate the burden on developers when writing and verifying systems code. It is a high-level functional language with a certifying compiler, which automatically proves the correctness of the compiled code and also provides a purely functional abstraction of the low-level program to the developer. Equational reasoning techniques can then be used to prove functional correctness properties of the program on top of this abstract semantics, which is notably less laborious than directly verifying the C code.
To make Cogent a more approachable and effective tool for developing real-world systems, we further strengthen the framework by extending the core language and its ecosystem. Specifically, we enrich the language to allow users to control the memory representation of algebraic data types, while retaining the automatic proof via a data layout refinement calculus. We repurpose existing tools in a novel way and develop an intuitive foreign function interface, which provides users with a seamless experience when using Cogent in conjunction with native C. We augment the Cogent ecosystem with a property-based testing framework, which helps developers better understand the impact formal verification has on their programs and enables a progressive approach to producing high-assurance systems. Finally, we explore refinement type systems, which we plan to incorporate into Cogent for more expressiveness and better integration of systems programmers with the verification process.
Complicated objects: artifacts from the Yuanming Yuan in Victorian Britain
The 1860 spoliation of the Summer Palace at the close of the Second Opium War by British and French troops was a watershed event within the development of Britain as an imperialist nation, which guaranteed a market for opium produced in its colony India and demonstrated the power of its armed forces. The distribution of the spoils to officers and diplomatic corps by campaign leaders in Beijing was also a sign of the British Army’s rising power as an instrument of the imperialist state. These conditions would suggest that objects looted from the site would be integrated into an imperialist aesthetic that reflected and promoted the material benefits of military engagement overseas and foregrounded the circumstances of their removal to Britain for campaign members and the British public.
This study mines sources dating to the two decades following the war – including British newspapers, auction house records, exhibition catalogs and works of art – to test this hypothesis. Findings show that initial movements of looted objects through the military and diplomatic corps did reinforce notions of imperialist power by enabling campaign members to profit from the spoliation through sales of looted objects and trophy displays. However, material from the Summer Palace arrived at a moment when British manufacturers and cultural leaders were engaged in a national effort to improve the quality of British goods to compete in the international marketplace, and looted art was quickly interpolated into this national conversation. Ironically, the same “free trade” imperatives that motivated the invasion energized a new design movement that embraced Chinese ornament.
As a consequence, political interpretations of the material outside of military collections were quickly joined by a strong response to Chinese ornament from cultural institutions and design leaders. Art from the Summer Palace held a prominent place at industrial art exhibitions of the postwar period and inspired new designs in a number of media. While the availability of Chinese imperial art was the consequence of a military invasion and therefore a product of imperialist expansion, evidence presented here shows that the design response to looted objects was not circumscribed by this political reality. Chinese ornament on imperial wares was ultimately celebrated for its formal qualities, and acknowledged links to the Summer Palace were an indicator of good design, not a celebration of victory over a failed Chinese state. Therefore, the looting of the Summer Palace was ultimately an essential factor in the development of modern design, the essence of which is a break with Classical ornament.
Ethnographies of Collaborative Economies across Europe: Understanding Sharing and Caring
"Sharing economy" and "collaborative economy" refer to a proliferation of initiatives, business models, digital platforms and forms of work that characterise contemporary life: from community-led initiatives and activist campaigns, to the impact of global sharing platforms in contexts such as network hospitality, transportation, etc. Sharing the common lens of ethnographic methods, this book presents in-depth examinations of collaborative economy phenomena. The book combines qualitative research and ethnographic methodology with a range of different collaborative economy case studies and topics across Europe, offering a truly interdisciplinary approach. It emerges from a unique, long-term, multinational, cross-European collaboration between researchers from various disciplines (e.g., sociology, anthropology, geography, business studies, law, computing, information systems), career stages, and epistemological backgrounds, brought together by a shared research interest in the collaborative economy. This book is a further contribution to the in-depth qualitative understanding of the complexities of the collaborative economy phenomenon. These rich accounts contribute to the painting of a complex landscape that spans several countries and regions, and diverse political, cultural, and organisational backdrops. This book also offers important reflections on the role of ethnographic researchers, and on their stance and outlook, that are of paramount interest across the disciplines involved in collaborative economy research.
The Sparse Abstract Machine
We propose the Sparse Abstract Machine (SAM), an abstract machine model for targeting sparse tensor algebra to reconfigurable and fixed-function spatial dataflow accelerators. SAM defines a streaming dataflow abstraction with sparse primitives that encompass a large space of scheduled tensor algebra expressions. SAM dataflow graphs naturally separate tensor formats from algorithms and are expressive enough to incorporate arbitrary iteration orderings and many hardware-specific optimizations. We also present Custard, a compiler from a high-level language to SAM that demonstrates SAM's usefulness as an intermediate representation. We automatically bind from SAM to a streaming dataflow simulator. We evaluate the generality and extensibility of SAM, explore the performance space of sparse tensor algebra optimizations using SAM, and show SAM's ability to represent dataflow hardware.
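The kind of sparse streaming primitive that such an abstract machine composes into dataflow graphs can be illustrated with a coordinate-stream intersection, the building block of sparse elementwise multiplication. This sketch is illustrative only and does not reflect SAM’s actual operator set:

```python
# Sketch: streaming intersection of two sorted (coordinate, value)
# streams, keeping only coordinates present in both. This is the kind
# of sparse primitive a dataflow abstract machine composes; it is not
# SAM's actual operator set.

def intersect(a, b):
    """Merge two sorted (coord, value) streams; multiply values on matches."""
    it_a, it_b = iter(a), iter(b)
    ca, va = next(it_a, (None, None))
    cb, vb = next(it_b, (None, None))
    out = []
    while ca is not None and cb is not None:
        if ca == cb:
            out.append((ca, va * vb))  # elementwise multiply on shared coord
            ca, va = next(it_a, (None, None))
            cb, vb = next(it_b, (None, None))
        elif ca < cb:
            ca, va = next(it_a, (None, None))  # advance the lagging stream
        else:
            cb, vb = next(it_b, (None, None))
    return out

print(intersect([(0, 2.0), (3, 1.0), (5, 4.0)], [(3, 3.0), (4, 1.0), (5, 2.0)]))
# → [(3, 3.0), (5, 8.0)]
```

Because each operand is consumed as a stream in coordinate order, the same primitive maps naturally onto spatial dataflow hardware, where streams flow between pipelined units rather than through random-access memory.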