
    Quantum surveillance and 'shared secrets'. A biometric step too far? CEPS Liberty and Security in Europe, July 2010

    It is no longer sensible to regard biometrics as having neutral socio-economic, legal and political impacts. Newer-generation biometrics are fluid and include behavioural and emotional data that can be combined with other data. A range of issues therefore needs to be reviewed in light of the increasing privatisation of ‘security’, which escapes effective, democratic parliamentary and regulatory control and oversight at national, international and EU levels, argues Juliet Lodge, Professor and co-Director of the Jean Monnet European Centre of Excellence at the University of Leeds, UK.

    Final

    The present report reviews the fundamental right to privacy and data protection that shall be assured to individuals, and Directive 95/46/EC, which provides more detailed rules on how to establish protection in the case of biometric data processing. The present framework does not seem apt to cope with all the issues and problems raised by biometric applications. The limited recent case law of the European Court of Human Rights and the Court of Justice sheds light on some relevant issues but does not answer all questions. The report provides an analysis of the use of biometric data and the applicable current legal framework in six countries. The research demonstrates that in various countries a position is taken against the central storage of biometric data because of the additional risks such storage entails. Furthermore, some countries stress the risks of using biometric characteristics that leave traces (such as fingerprint, face and voice). In general, controllers of biometric applications receive limited clear guidance as to how to implement biometric applications. Because of these conflicting approaches, this report makes general recommendations with regard to the regulation of central storage of biometric data and various other aspects, including the need for transparency of biometric systems.

    Large-scale Biometrics Deployment in Europe: Identifying Challenges and Threats

    With large-scale biometrics deployment in the EU still in its infancy, and with stakeholders racing to position themselves in view of the lucrative market that is forecast, a study was launched to identify the challenges and threats that need to be dealt with. This report on large-scale biometrics deployment in Europe is the result. It tackles three main issues: status, security/privacy, and testing/certification processes. A survey was conducted to help reveal the actual status of large-scale biometrics deployment initiatives in the EU. Its main outcome was that a policy of open dissemination of implementation results is needed, mainly covering deployment plans, strategies, barriers and best practices. The security/privacy study identified a number of challenges, the most important of which related to proportionality and compliance with the existing regulatory framework, while at the same time revealing a substantial number of related actions aimed at ensuring both data security and privacy. The aim of the Bio Testing Europe study was twofold: to identify and collect comparable, certified results across different technologies, vendors and deployment environments, and to feed this information into discussion among the members of a European network that would enhance European testing and certification capacity. The study presents an integrated picture of the identified issues as well as a number of recommendations. With some of the systems being implemented targeting millions of individuals as users, it is important for policy makers to adopt some of the options presented so as to address the challenges identified through the study.

    Privacy & law enforcement


    The potential use of smart cards in vehicle management with particular reference to the situation in Western Australia

    Vehicle management may be considered to consist of traffic management, usage control, maintenance, and security. Various regulatory authorities undertake the first aspect, fleet managers are concerned with all aspects, and owner-drivers are interested mainly in maintenance and security. Car theft poses a universal security problem. Personalisation, including navigational assistance, might be achieved as a by-product of an improved management system. Authorities and fleet managers may find smartcards to be key components of an improved system, but owners may feel that the need for improved security does not justify the cost. This thesis seeks to determine whether smartcards may be used to personalise vehicles in order to improve vehicle management within a foreseeable time, and to suggest when that might happen. In the process, four broad questions are addressed:
    • First, what improvements in technology are needed to make any improved scheme using smartcards practicable, and what can be expected in the near future?
    • Second, what problems and difficulties may impede the development of improved management?
    • Third, what non-vehicle applications might create an environment in which a viable scheme could emerge?
    • Finally, is there a perceived need for improved vehicle management?
    The method involved a literature search, questionnaires issued to owner-drivers and fleet managers, discussions with fleet managers, the preparation of data-flow and state diagrams, and the construction of a simulation of a possible security approach. The study concludes that although vehicle personalisation is possible, and desirable, it is unlikely to occur within the next decade, because the environment needed to make it practicable will not emerge until a number of commercial and standardisation problems that obstruct all smartcard applications have been solved.

    Risking lives: Smart borders, private interests and AI policy in Europe

    Recent years have seen huge investment in, and advancement of, technologically aided border controls, from biometric databases for identification to unmanned drones for external border surveillance. Data infrastructures and Artificial Intelligence (AI), often from private providers, are playing an increasingly pivotal role in attempts to predict, prevent and control often illegalised mobility into and across Europe. At the same time, the European Union is in the final stages of negotiating and adopting the text of the proposed AI act, the inaugural EU legislation designed to establish comprehensive protections and safeguards with regard to the development, application and use of AI technology. This report explores and interrogates the interplay between smart borders, private interests, and policy surrounding AI within Europe. It does so to make apparent how the concept of ‘risk’ is integral to the advancement of smart border controls, while concurrently providing the framework for the governance of data infrastructures and AI. This highlights how AI is both embedded within and entrenching particular approaches to migration controls. To understand the relationship between smart borders, private interests and AI policy, we explore four components of smart borders in Europe: the development of ‘Fortress Europe’ in terms of securitisation, militarisation, and externalisation; the technology used in smart borders; funding and profits; and AI policy. The report demonstrates that the concept of ‘risk’ in the context of migration and AI is used as both a legitimisation and a regulatory tool. On the one hand, risk is used to legitimise the ongoing investment in and development of hi-tech surveillance and AI at the border to prevent illegalised migrants from reaching European territory. Here, illegalised migrants are portrayed as a security issue and a threat to Europe.
    On the other hand, the language of risk is also adopted as a regulatory tool to categorise AI applications within the AI act. Within these policy developments, we maintain that it is essential to include an exploration of the role of private defence and security companies and, as we investigate, their lobbying activities throughout the development of the AI act. These companies stand to make huge profits from the development of smart, securitised borders, seen as the answer to the problem of ‘risky’ migrants. From this, we end by considering the extent to which the AI act fails to benefit and protect those most affected by the harmful effects of smart borders.