Measuring Software Process: A Systematic Mapping Study
Context: Measurement is essential to reach predictable performance and high capability processes. It provides
support for better understanding, evaluation, management, and control of the development process
and project, as well as the resulting product. It also enables organizations to improve and predict their processes'
performance, which places them in a better position to make appropriate decisions. Objective:
This study aims to understand the measurement of the software development process, to identify studies,
create a classification scheme based on the identified studies, and then to map such studies into the scheme
to answer the research questions. Method: Systematic mapping is the selected research methodology for this
study. Results: A total of 462 studies are included and classified into four topics with respect to their focus
and into three groups based on the publishing date. Five abstractions and 64 attributes were identified, and
25 methods/models and 17 contexts were distinguished. Conclusion: Capability and performance were the
most measured process attributes, while effort and performance were the most measured project attributes.
Goal Question Metric and Capability Maturity Model Integration were the main methods and models used
in the studies, whereas agile/lean development and small/medium-size enterprise were the most frequently
identified research contexts. Funding: Ministerio de Economía y Competitividad TIN2013-46928-C3-3-R, TIN2016-76956-C3-2-R, TIN2015-71938-RED
Evaluating the resilience and security of boundaryless, evolving socio-technical Systems of Systems
Transparent government, not transparent citizens: a report on privacy and transparency for the Cabinet Office
1. Privacy is extremely important to transparency. The political legitimacy of a transparency programme will depend crucially on its ability to retain public confidence. Privacy protection should therefore be embedded in any transparency programme, rather than bolted on as an afterthought.
2. Privacy and transparency are compatible, as long as the former is carefully protected and considered at every stage.
3. Under the current transparency regime, in which public data is specifically understood not to include personal data, most data releases will not raise privacy concerns. However, some will, especially as we move toward a more demand-driven scheme.
4. Discussion about deanonymisation has been driven largely by legal considerations, with a consequent neglect of the input of the technical community.
5. There are no complete legal or technical fixes to the deanonymisation problem. We should continue to anonymise sensitive data, being initially cautious about releasing such data under the Open Government Licence while we continue to take steps to manage and research the risks of deanonymisation. Further investigation to determine the level of risk would be very welcome.
6. There should be a focus on procedures to output an auditable debate trail. Transparency about transparency (metatransparency) is essential for preserving trust and confidence.
Fourteen recommendations are made to address these conclusions.
Artificial intelligence and UK national security: Policy considerations
RUSI was commissioned by GCHQ to conduct an independent research study into the use of artificial intelligence (AI) for national security purposes. The aim of this project is to establish an independent evidence base to inform future policy development regarding national security uses of AI. The findings are based on in-depth consultation with stakeholders from across the UK national security community, law enforcement agencies, private sector companies, academic and legal experts, and civil society representatives. This was complemented by a targeted review of existing literature on the topic of AI and national security.
The research has found that AI offers numerous opportunities for the UK national security community to improve the efficiency and effectiveness of existing processes. AI methods can rapidly derive insights from large, disparate datasets and identify connections that would otherwise go unnoticed by human operators. However, in the context of national security and the powers given to UK intelligence agencies, the use of AI could give rise to additional privacy and human rights considerations which would need to be assessed within the existing legal and regulatory framework. For this reason, enhanced policy and guidance is needed to ensure the privacy and human rights implications of national security uses of AI are reviewed on an ongoing basis as new analysis methods are applied to data.
Relevance, benefits, and problems of software modelling and model driven techniques: A survey in the Italian industry
Context: Claimed benefits of software modelling and model driven techniques are improvements in productivity, portability, maintainability and interoperability. However, little effort has been devoted to collecting evidence to evaluate their actual relevance, benefits and usage complications. Goal: The main goals of this paper are: (1) assess the diffusion and relevance of software modelling and MD techniques in the Italian industry, (2) understand the expected and achieved benefits, and (3) identify which problems limit/prevent their diffusion. Method: We conducted an exploratory personal opinion survey with a sample of 155 Italian software professionals by means of a Web-based questionnaire, online from February to April 2011. Results: Software modelling and MD techniques are very relevant in the Italian industry. The adoption of simple modelling brings common benefits (better design support, documentation improvement, better maintenance, and higher software quality), while MD techniques make it easier to achieve improved standardization, higher productivity, and platform independence. We identified problems, some hindering adoption (too much effort required and limited usefulness) and others preventing it (lack of competencies and supporting tools). Conclusions: The relevance represents an important objective motivation for researchers in this area. The relationship between techniques and attainable benefits represents an instrument for practitioners planning the adoption of such techniques. In addition, the findings may provide hints for companies and universities.
The Criminalisation of Migration in Europe: A State-of-the-Art of the Academic Literature and Research. CEPS Liberty and Security in Europe No. 61, October 2013
In the last 30 years, a clear trend has come to define modern immigration law and policy. A set of seemingly disparate developments concerning the constant reinforcement of border controls, tightening of conditions of entry, expanding capacities for detention and deportation and the proliferation of criminal sanctions for migration offences, accompanied by an anxiety on the part of the press, public and political establishment regarding migrant criminality, can now be seen to form a definitive shift in the European Union towards the so-called "criminalisation of migration".
This paper aims to provide an overview of the "state of the art" in the academic literature and EU research on the criminalisation of migration in Europe. It analyses three key manifestations of the so-called "crimmigration" trend: discursive criminalisation; the use of criminal law for migration management; and immigrant detention, focusing both on developments in the domestic legislation of EU member states and on the increasing conflation of mobility, crime and security which has accompanied EU integration. By identifying the trends, synergies and gaps in the scholarly approaches dealing with the criminalisation of migration, the paper seeks to provide a framework for on-going research under Work Package 8 of the FIDUCIA project.
The Precautionary Principle (with Application to the Genetic Modification of Organisms)
We present a non-naive version of the Precautionary Principle (PP) that allows us to avoid paranoia and paralysis by confining precaution to specific domains and problems. PP is intended to deal with uncertainty and risk in cases where the absence of evidence and the incompleteness of scientific knowledge carry profound implications and in the presence of risks of "black swans", unforeseen and unforeseeable events of extreme consequence. We formalize PP, placing it within the statistical and probabilistic structure of ruin problems, in which a system is at risk of total failure, and in place of risk we use a formal fragility-based approach. We make a central distinction between (1) thin and fat tails and (2) local and systemic risks, and place PP in the joint fat-tailed and systemic case. We discuss the implications for GMOs (compared to nuclear energy) and show that GMOs represent a public risk of global harm (while harm from nuclear energy is comparatively limited and better characterized). PP should be used to prescribe severe limits on GMOs.
Data analytics and algorithms in policing in England and Wales: Towards a new policy framework
RUSI was commissioned by the Centre for Data Ethics and Innovation (CDEI) to conduct an independent study into the use of data analytics by police forces in England and Wales, with a focus on algorithmic bias. The primary purpose of the project is to inform CDEI's review of bias in algorithmic decision-making, which is focusing on four sectors, including policing, and working towards a draft framework for the ethical development and deployment of data analytics tools for policing.
This paper focuses on advanced algorithms used by the police to derive insights, inform operational decision-making or make predictions. Biometric technology, including live facial recognition, DNA analysis and fingerprint matching, is outside the direct scope of this study, as are covert surveillance capabilities and digital forensics technology, such as mobile phone data extraction and computer forensics. However, because many of the policy issues discussed in this paper stem from general underlying data protection and human rights frameworks, these issues will also be relevant to other police technologies, and their use must be considered in parallel to the tools examined in this paper.
The project involved engaging closely with senior police officers, government officials, academics, legal experts, regulatory and oversight bodies and civil society organisations. Sixty-nine participants took part in the research in the form of semi-structured interviews, focus groups and roundtable discussions. The project has revealed widespread concern across the UK law enforcement community regarding the lack of official national guidance for the use of algorithms in policing, with respondents suggesting that this gap should be addressed as a matter of urgency.
Any future policy framework should be principles-based and complement existing police guidance in a "tech-agnostic" way. Rather than establishing prescriptive rules and standards for different data technologies, the framework should establish standardised processes to ensure that data analytics projects follow recommended routes for the empirical evaluation of algorithms within their operational context and evaluate the project against legal requirements and ethical standards. The new guidance should focus on ensuring multi-disciplinary legal, ethical and operational input from the outset of a police technology project; a standard process for model development, testing and evaluation; a clear focus on the human-machine interaction and the ultimate interventions a data-driven process may inform; and ongoing tracking and mitigation of discrimination risk.