Lightweight Multilingual Software Analysis
Developer preferences, language capabilities and the persistence of older
languages contribute to the trend that large software codebases are often
multilingual, that is, written in more than one computer language. While
developers can leverage monolingual software development tools to build
software components, companies are faced with the problem of managing the
resultant large, multilingual codebases to address issues with security,
efficiency, and quality metrics. The key challenge is to address the opaque
nature of the language interoperability interface: one language calling
procedures in a second (which may call a third, or even back to the first),
resulting in a potentially tangled, inefficient and insecure codebase. An
architecture is proposed for lightweight static analysis of large multilingual
codebases: the MLSA architecture. Its modular and table-oriented structure
addresses the open-ended nature of multiple languages and language
interoperability APIs. As an application, we focus on the construction of
call-graphs that capture both inter-language and intra-language calls. The
algorithms for extracting multilingual call-graphs from codebases are
presented, and several examples of multilingual software engineering analysis
are discussed. The state of the implementation and testing of MLSA is
presented, and the implications for future work are discussed.
Comment: 15 pages
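The table-oriented design described above can be illustrated with a minimal sketch (this is not the authors' MLSA implementation; all names and the table shapes are assumptions): each monolingual analyzer contributes a table of function definitions and a table of call facts, and a linker merges them into one call graph, marking edges whose caller and callee live in different languages.

```python
from collections import defaultdict

def build_call_graph(defs, calls):
    """Merge per-language analyzer tables into a multilingual call graph.

    defs:  {language: {function names defined in that language}}
    calls: iterable of (caller, callee) pairs pooled from all call tables
    Returns {caller: [edge dicts]} with each edge tagged cross_language.
    """
    # Map each function to the language it is defined in.
    lang_of = {fn: lang for lang, fns in defs.items() for fn in fns}
    graph = defaultdict(list)
    for caller, callee in calls:
        edge = {
            "callee": callee,
            "caller_lang": lang_of.get(caller, "?"),
            "callee_lang": lang_of.get(callee, "?"),
        }
        # An inter-language edge crosses an interoperability API boundary.
        edge["cross_language"] = edge["caller_lang"] != edge["callee_lang"]
        graph[caller].append(edge)
    return dict(graph)

# Hypothetical two-language codebase: Python calling into C and back.
defs = {"python": {"main", "report"}, "c": {"fast_sum"}}
calls = [("main", "fast_sum"), ("main", "report"), ("fast_sum", "report")]
g = build_call_graph(defs, calls)
# "main" -> "fast_sum" is inter-language; "main" -> "report" is intra-language.
```

Because the linker only consumes tables, adding a new language or interoperability API means adding another table producer, not changing the graph builder — the open-ended modularity the abstract describes.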
The future of social is personal: the potential of the personal data store
This chapter argues that technical architectures that facilitate the longitudinal, decentralised and individual-centric personal collection and curation of data will be an important, but partial, response to the pressing problem of the autonomy of the data subject, and the asymmetry of power between the subject and large scale service providers/data consumers. Towards framing the scope and role of such Personal Data Stores (PDSes), the legalistic notion of personal data is examined, and it is argued that a more inclusive, intuitive notion expresses more accurately what individuals require in order to preserve their autonomy in a data-driven world of large aggregators. Six challenges towards realising the PDS vision are set out: the requirement to store data for long periods; the difficulties of managing data for individuals; the need to reconsider the regulatory basis for third-party access to data; the need to comply with international data handling standards; the need to integrate privacy-enhancing technologies; and the need to future-proof data gathering against the evolution of social norms. The open experimental PDS platform INDX is introduced and described, as a means of beginning to address at least some of these six challenges.
Meeting the Challenge of Interdependent Critical Networks under Threat: The Paris Initiative
Keywords: large-scale risks; international crisis management; interdependencies; critical infrastructures; anthrax; collective initiative; strategy; command-staff preparedness
Blaming the victim, all over again: Waddell and Aylward's biopsychosocial (BPS) model of disability
The biopsychosocial (BPS) model of mental distress, originally conceived by the American psychiatrist George Engel in the 1970s and commonly used in psychiatry and psychology, has been adapted by Gordon Waddell and Mansell Aylward to form the theoretical basis for current UK Government thinking on disability. Most importantly, the Waddell and Aylward version of the BPS has played a key role as the Government has sought to reform spending on out-of-work disability benefits. This paper presents a critique of Waddell and Aylward's model, examining its origins, its claims and the evidence it employs. We will argue that its potential for genuine inter-disciplinary cooperation and the holistic and humanistic benefits for disabled people as envisaged by Engel are not now, if they ever have been, fully realized. Any potential benefit it may have offered has been eclipsed by its role in Coalition/Conservative government social welfare policies that have blamed the victim and justified restriction of entitlements.
Beyond the buzzword: big data and national security decision-making
This article explores the role big data plays in the national security decision-making process. The global surveillance disclosures initiated by former NSA contractor Edward Snowden have increased public and academic discussions about big data and national security. Yet, efforts to summarize and import insights from the vast and interdisciplinary literature on data analytics have remained rare in the field of security studies. To fill this gap, we explain the core characteristics of big data, provide an overview of the techniques and methods of data analytics, and explore how big data can support the core national security process of intelligence. Big data is not only defined by the volume of data but also by their velocity, variety and issues of veracity. Scientists have developed a number of techniques to extract information from big data and support national security practices. We find that data analytics tools contribute to and influence all the core intelligence functions in the contemporary US national security apparatus. However, these tools cannot replace the central role of humans and their ability to contextualize security threats. The fundamental value of big data lies in humans' ability to understand its power and mitigate its limits.
Big Data in the Oil and Gas Industry: A Promising Courtship
The energy industry remains one of the highest money-producing and investment industries in the world. The United States' own economic stability depends greatly on the stability of oil and gas prices. Various factors affect the amount of money that will continue to be invested in producing oil. A main disadvantage to the oil and gas industry is its lack of technological adaptation. This weakens the industry because the surest measures are not currently being taken to produce oil in optimally efficient, safe, and cost-effective ways. Big data has gained global recognition as an opportunity to gather large volumes of information in real-time and translate data sets into actionable insights. In a low commodity price environment, saving time, reducing costs, and improving safety are crucial outcomes that can be realized using machine learning in oil and gas operations. Big data provides the opportunity to use unsupervised learning. For example, with this approach, engineers can predict oil wells' optimal barrels of production given the completion data in a specific area. However, a caveat to utilizing big data in the oil and gas industry is that there simply is neither enough physical data nor data velocity in the industry to be properly referred to as “big data.” Big data, as it develops, will nonetheless significantly change the energy business in the future, as it already has in various other industries.
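The kind of unsupervised learning on completion data the abstract alludes to can be sketched with a toy example (this is not from the article; the feature, the well values, and the two-cluster choice are all hypothetical): a minimal 1-D k-means pass that groups wells by a single completion feature, here proppant intensity, so that production expectations can be benchmarked within each group.

```python
def kmeans_1d(values, iters=20):
    """Two-cluster 1-D k-means; returns (centers, labels)."""
    # Crude but deterministic initialization: the extreme values.
    centers = [min(values), max(values)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assign each well to its nearest cluster center.
        labels = [0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
                  for v in values]
        # Move each center to the mean of its members.
        for c in (0, 1):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centers[c] = sum(members) / len(members)
    return centers, labels

# Hypothetical proppant intensity (lb/ft) for eight wells.
proppant = [800, 850, 900, 820, 2400, 2500, 2350, 2450]
centers, labels = kmeans_1d(proppant)
# The wells separate into a low-intensity and a high-intensity completion group.
```

A production workflow would use a library implementation (e.g. scikit-learn's KMeans) on many features at once; the sketch only shows the grouping step that makes per-group production comparisons meaningful.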
Poverty, Poaching and Trafficking: What are the links?
A rapid review of academic and grey literature revealed that the links between poverty, poaching and trafficking are under-researched and poorly understood. Yet, the assumption that poaching occurs because of poverty is omnipresent, with little “hard evidence” to support the claim. Despite this, the authors are confident that the links are there, based on the evidence gathered. However, their understandings are hampered by a series of factors: trafficking and poaching are overwhelmingly framed as an issue of conservation/biodiversity loss rather than of poverty and development; it is difficult to collect clear and detailed data on poaching precisely because of its illicit nature; and many of the cases examined are also linked in with conflict zones, making research even more challenging.