
    A First Practical Fully Homomorphic Crypto-Processor Design: The Secret Computer is Nearly Here

    Following a sequence of hardware designs for a fully homomorphic crypto-processor - a general-purpose processor that natively runs encrypted machine code on encrypted data in registers and memory, resulting in encrypted machine states - proposed by the authors in 2014, we discuss a working prototype of the first of those, a so-called 'pseudo-homomorphic' design. This processor is in principle safe against physical or software-based attacks by the owner/operator of the processor on user processes running in it. The processor is intended as a more secure option for those emerging computing paradigms that require trust to be placed in computations carried out in remote locations or overseen by untrusted operators. The prototype has a single-pipeline superscalar architecture that runs OpenRISC standard machine code in two distinct modes. The processor runs in the encrypted mode (the unprivileged 'user' mode, with a long pipeline) at 60-70% of the speed of the unencrypted mode (the privileged 'supervisor' mode, with a short pipeline), emitting a completed encrypted instruction every 1.67-1.8 cycles on average in real trials.
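
    As a quick consistency check of those throughput figures (a sketch only: the unencrypted-mode cycles-per-instruction value is not given in the abstract and is assumed here), the relative speed of the two modes follows directly from their cycles per completed instruction:

        \[
          \frac{\text{encrypted-mode speed}}{\text{unencrypted-mode speed}}
          \;=\; \frac{\mathrm{CPI}_{\text{plain}}}{\mathrm{CPI}_{\text{enc}}},
          \qquad \mathrm{CPI}_{\text{enc}} \in [1.67,\ 1.8],
        \]

    so the quoted 60-70% figure is consistent with an unencrypted-mode CPI of roughly 1.0 to 1.25 (0.6 × 1.67 ≈ 1.0 and 0.7 × 1.8 ≈ 1.26).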

    Certifications of Critical Systems – The CECRIS Experience

    In recent years, a considerable amount of effort has been devoted, both in industry and academia, to the development, validation and verification of critical systems, i.e. those systems whose malfunctions or failures reach a critical level, both in terms of risk to human life and in terms of economic impact. Certifications of Critical Systems – The CECRIS Experience documents the main insights on cost-effective verification and validation processes gained during the European research project CECRIS (Certification of Critical Systems). The objective of the research was to tackle the challenges of certification by focusing on the aspects that are most difficult and most important for the current and future critical-systems industry: the effective use of methodologies, processes and tools. The CECRIS project took a step forward in the growing field of development, verification, validation and certification of critical systems, focusing on the most difficult and important aspects of the critical-system development, verification, validation and certification process. Starting from both the scientific and the industrial state-of-the-art methodologies for system development, and from the impact of their usage on the verification, validation and certification of critical systems, the project aimed at developing strategies and techniques, supported by automatic or semi-automatic tools and methods, for these activities, and at setting guidelines to support engineers during the planning of the verification and validation phases.

    Scene-Dependency of Spatial Image Quality Metrics

    This thesis is concerned with the measurement of spatial imaging performance and the modelling of spatial image quality in digital capturing systems. Spatial imaging performance and image quality relate to the objective and subjective reproduction of luminance contrast signals by the system, respectively; they are critical to overall perceived image quality. The Modulation Transfer Function (MTF) and Noise Power Spectrum (NPS) describe the signal (contrast) transfer and noise characteristics of a system, respectively, with respect to spatial frequency. Strictly speaking, both are only applicable to linear systems, since they are founded upon linear system theory. Many contemporary capture systems use adaptive image signal processing, such as denoising and sharpening, to optimise output image quality. These non-linear processes change their behaviour according to characteristics of the input signal (i.e. the scene being captured). This behaviour renders system performance “scene-dependent” and difficult to measure accurately. The MTF and NPS are traditionally measured from test charts containing suitable predefined signals (e.g. edges, sinusoidal exposures, noise or uniform luminance patches). These signals trigger adaptive processes at uncharacteristic levels, since they are unrepresentative of natural scene content. Thus, for systems using adaptive processes, the resulting MTFs and NPSs are not representative of performance “in the field” (i.e. when capturing real scenes). Spatial image quality metrics for capturing systems aim to predict the relationship between MTF and NPS measurements and subjective ratings of image quality. They cascade both measures with contrast sensitivity functions that describe human visual sensitivity with respect to spatial frequency. The most recent metrics designed for adaptive systems use MTFs measured with the dead leaves test chart, which is more representative of natural scene content than the abovementioned test charts. This marks a step toward modelling image quality with respect to real scene signals. This thesis presents novel scene-and-process-dependent MTFs (SPD-MTFs) and NPSs (SPD-NPSs). They are measured from imaged pictorial scene (or dead leaves target) signals to account for system scene-dependency. Further, a number of spatial image quality metrics are revised to account for capture-system and visual scene-dependency: their MTF and NPS parameters are replaced with SPD-MTFs and SPD-NPSs, and their standard visual functions are replaced with contextual detection (cCSF) or contextual discrimination (cVPF) functions. In addition, two novel spatial image quality metrics are presented, the log Noise Equivalent Quanta (NEQ) and the Visual log NEQ, which implement SPD-MTFs and SPD-NPSs. The metrics, SPD-MTFs and SPD-NPSs were validated by analysing measurements from simulated image capture pipelines that applied either linear or adaptive image signal processing. The SPD-NPS measures displayed little evidence of measurement error, and the metrics performed most accurately when they used SPD-NPSs measured from images of scenes. The benefit of deriving SPD-MTFs from images of scenes was traded off, however, against measurement bias; most metrics performed most accurately with SPD-MTFs derived from dead leaves signals. Implementing the cCSF or cVPF did not increase metric accuracy. The log NEQ and Visual log NEQ metrics proposed in this thesis were highly competitive, outperforming metrics of the same genre. They were also more consistent than the IEEE P1858 Camera Phone Image Quality (CPIQ) metric when their input parameters were modified. The advantages and limitations of all performance measures and metrics are discussed, together with their practical implementation and relevant applications.
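
    For orientation only, the following is a minimal numerical sketch of the kind of cascade such metrics perform: a generic noise-equivalent-quanta style calculation that combines an MTF, an NPS and a visual weighting. It is not the thesis's log NEQ or Visual log NEQ definition; the frequency grid, the toy MTF and NPS curves, the mean-signal value and the CSF weighting below are all illustrative assumptions.

        import numpy as np

        # Illustrative spatial-frequency axis (cycles/pixel) and toy system measurements.
        # In practice the MTF and NPS would come from SPD-MTF / SPD-NPS measurements.
        freqs = np.linspace(0.01, 0.5, 50)      # spatial frequencies
        mtf = np.exp(-3.0 * freqs)              # assumed modulation transfer function
        nps = 1e-4 * (1.0 + 2.0 * freqs)        # assumed noise power spectrum
        mean_signal = 0.18                      # assumed mean (normalised) signal level

        # Generic NEQ-style cascade: squared signal transfer over noise power.
        neq = (mean_signal ** 2) * mtf ** 2 / nps

        # A simple visually weighted summary, using a toy contrast sensitivity weighting
        # as a stand-in for the contextual cCSF/cVPF functions discussed above.
        csf = freqs * np.exp(-4.0 * freqs)
        visual_score = np.log10(np.trapz(neq * csf, freqs))
        print(f"toy visually weighted log-NEQ summary: {visual_score:.2f}")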

    DEPENDABILITY IN CLOUD COMPUTING

    The technological advances and success of Service-Oriented Architectures and the Cloud computing paradigm have produced a revolution in Information and Communications Technology (ICT). Today, a wide range of services is provisioned to users in a flexible and cost-effective manner, thanks to the encapsulation of several technologies within modern business models. These services not only offer high-level software functionalities such as social networks or e-commerce, but also middleware tools that simplify application development, and low-level data storage, processing and networking resources. Hence, with the advent of the Cloud computing paradigm, today's ICT allows users to completely outsource their IT infrastructure and benefit significantly from economies of scale. At the same time, with the widespread use of ICT, the amount of data being generated, stored and processed by private companies, public organizations and individuals is rapidly increasing. The in-house management of data and applications is proving to be highly cost-intensive, and Cloud computing is becoming the destination of choice for an increasing number of users. As a consequence, Cloud computing services are being used to realize a wide range of applications, each having unique dependability and Quality-of-Service (QoS) requirements. For example, a small enterprise may use a Cloud storage service as a simple backup solution, requiring high data availability, while a large government organization may execute a real-time mission-critical application using the Cloud compute service, requiring high levels of dependability (e.g., reliability, availability, security) and performance. Service providers are presently able to offer sufficient resource heterogeneity, but are failing to satisfy users' dependability requirements, mainly because failures and vulnerabilities in Cloud infrastructures are the norm rather than the exception. This thesis provides a comprehensive solution for improving the dependability of Cloud computing, so that users can justifiably trust Cloud computing services for building, deploying and executing their applications. A number of approaches, ranging from the use of trustworthy hardware to secure application design, have been proposed in the literature. The proposed solution consists of three inter-operable yet independent modules, each designed to improve dependability under a different system context and/or use-case. A user can selectively apply a single module or combine them suitably to improve the dependability of her applications both at design time and at runtime. Based on the modules applied, the overall solution can increase dependability at three distinct levels. In the following, we provide a brief description of each module. The first module comprises a set of assurance techniques that validates whether a given service supports a specified dependability property with a given level of assurance and, accordingly, awards it a machine-readable certificate. To achieve this, we define a hierarchy of dependability properties, where a property represents the dependability characteristics of the service and its specific configuration. A model of the service is also used to verify the validity of the certificate using runtime monitoring, thus complementing the dynamic nature of the Cloud computing infrastructure and making the certificate usable both at discovery time and at runtime.
    This module also extends the service registry to allow users to select services with a set of certified dependability properties, hence offering the basic support required to implement dependable applications. We note that this module directly considers services implemented by service providers and provides awareness tools that allow users to be aware of the QoS offered by potential partner services. We denote this passive technique as the solution that offers the first level of dependability in this thesis. Service providers typically implement a standard set of dependability mechanisms that satisfy the basic needs of most users. Since each application has unique dependability requirements, assurance techniques are not always effective, and a pro-active approach to dependability management is also required. The second module of our solution advocates the innovative approach of offering dependability as a service to users' applications and realizes a framework containing all the mechanisms required to achieve this. We note that this approach relieves users from implementing low-level dependability mechanisms and system management procedures during application development, and satisfies the specific dependability goals of each application. We denote the module offering dependability as a service as the solution that offers the second level of dependability in this thesis. The third, and last, module of our solution concerns secure application execution. This module considers complex applications and presents advanced resource management schemes that deploy applications with improved optimality when compared to the algorithms of the second module. It improves the dependability of a given application by minimizing its exposure to existing vulnerabilities, while being subject to the same dependability policies and resource allocation conditions as in the second module. Our approach to secure application deployment and execution denotes the third level of dependability offered in this thesis. The contributions of this thesis can be summarized as follows.
    • With respect to assurance techniques, our contributions are: i) definition of a hierarchy of dependability properties, an approach to service modeling, and a model transformation scheme; ii) definition of a dependability certification scheme for services; iii) an approach to service selection that considers users' dependability requirements; iv) definition of a solution to dependability certification of composite services, where the dependability properties of a composite service are calculated on the basis of the dependability certificates of its component services.
    • With respect to offering dependability as a service, our contributions are: i) definition of a delivery scheme that transparently operates on users' applications and satisfies their dependability requirements; ii) design of a framework that encapsulates all the components necessary to offer dependability as a service to users; iii) an approach to translate high-level user requirements into low-level dependability mechanisms; iv) formulation of constraints that allow enforcement of deployment conditions inherent to dependability mechanisms, and an approach to satisfy such constraints during resource allocation; v) a resource management scheme that masks the effect of system changes by adapting the current allocation of the application.
    • With respect to security management, our contributions are: i) an approach that deploys users' applications in the Cloud infrastructure such that their exposure to vulnerabilities is minimized; ii) an approach to build interruptible elastic algorithms whose optimality improves as the processing time increases, eventually converging to an optimal solution.
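
    The secure-deployment idea in the third module can be illustrated with a deliberately small sketch: place each application component on the candidate host with the lowest aggregate vulnerability score that still has capacity. The host names, scores, capacities and the greedy rule below are invented for illustration and are not the thesis's actual allocation algorithm, which additionally enforces dependability-mechanism deployment constraints and adapts allocations at runtime.

        # Toy vulnerability-aware placement: greedily pick, for each component, the
        # candidate host with the lowest vulnerability score that has a free slot.
        hosts = {
            "host-a": {"vuln_score": 7.5, "free_slots": 2},
            "host-b": {"vuln_score": 2.1, "free_slots": 1},
            "host-c": {"vuln_score": 4.0, "free_slots": 3},
        }
        components = ["web", "db", "worker"]

        placement = {}
        for comp in components:
            candidates = [h for h, info in hosts.items() if info["free_slots"] > 0]
            best = min(candidates, key=lambda h: hosts[h]["vuln_score"])
            placement[comp] = best
            hosts[best]["free_slots"] -= 1

        print(placement)  # {'web': 'host-b', 'db': 'host-c', 'worker': 'host-c'}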

    International Initiatives towards legal harmonisation in the field of Funds transfers, payments and payment systems - Annotated Bibliography

    The main sections of this annotated bibliography were prepared by the author between 1998 and 2002. The revisions undertaken until January 2006 are selective and do not claim to be complete. The paper has no official character and merely reflects the views of the author. Its purpose is to foster awareness of efforts towards international harmonisation in areas that are of interest to central banks, commercial banks and researchers, with a particular emphasis on issues relating to payments and payment systems. The elaboration of this publication is an ongoing project, and any suggestions regarding additional initiatives or bibliographic references that might be included in future updates are appreciated. An earlier hardcopy version was published in Hadding/Schneider (eds.), Transboundary Payment Transactions in the European Single Market / Grenzüberschreitender Zahlungsverkehr im Europäischen Binnenmarkt, Köln: Bundesanzeiger, 1997, pp. 193-247.

    New statistical disclosure attacks on anonymous communications networks

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, Departamento de Ingeniería del Software e Inteligencia Artificial, defended on 5 February 2016. Anonymity is a dimension of privacy in which a person withholds their identity in the social relationships they maintain. In electronic communications, anonymity makes it possible to keep hidden any information that could lead to the identification of the parties involved in a transaction, and preserving it in networked information exchange is nowadays a crucial concern. To this end, a range of technologies, commonly called Privacy Enhancing Technologies, have been developed. Low-latency anonymous communication systems based on networks of mixes are among the most popular and simplest ways to protect anonymity in communications between users. These systems are exposed to a series of traffic-analysis attacks that compromise the privacy of the relationships between the users participating in a communication, that is, they determine, to a greater or lesser extent, the identities of senders and receivers. Notable attack types include those based on flooding the network with false information to obtain patterns in the mix network, those based on timing, those based on message content, and the so-called intersection attacks, which aim to infer patterns of relationships between users, through probabilistic reasoning or optimization, from information gathered by the attacker in batches or over a period of time. This last type of attack is the subject of the present thesis. Intersection attacks derive statistical estimates of communication patterns (the mean number of messages sent between a pair of users, the probability of a relationship between users, etc.). These models were named Statistical Disclosure Attacks and were soon considered capable of seriously compromising the anonymity of mix-based networks.
    Nevertheless, the hypotheses assumed in the first publications developing these attacks were excessively demanding and unrealistic. It was common to suppose that messages were sent with uniform probability to the receivers, to assume knowledge of the number of friends a user has or a-priori knowledge of some network parameters, to suppose similar behaviour between users, etc...
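
    As a rough illustration of the intersection/statistical-disclosure family of attacks described above, the sketch below computes the classic statistical disclosure estimate over a handful of observed mix rounds: average the receiver counts in the rounds where the target sent, then subtract the expected contribution of the other senders. The round data, the uniform background model and the batch size are invented for the example; this is the textbook attack, not the refined attacks developed in the thesis.

        import numpy as np

        # Observed receiver counts in rounds of a simple threshold mix in which the
        # target user sent exactly one message. All numbers here are invented.
        rounds_with_target = np.array([
            [2, 0, 1, 0],
            [1, 1, 1, 0],
            [2, 1, 0, 0],
        ], dtype=float)
        background = np.array([0.25, 0.25, 0.25, 0.25])  # assumed uniform cover traffic
        batch_size = 3                                    # messages per round

        # Classic SDA-style estimate: average the observed distribution in the target's
        # rounds, then remove the expected share of the other (batch_size - 1) senders.
        observed = (rounds_with_target / batch_size).mean(axis=0)
        estimate = batch_size * observed - (batch_size - 1) * background
        estimate = np.clip(estimate, 0.0, None)
        estimate /= estimate.sum()
        print("estimated recipient profile of the target:", np.round(estimate, 2))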

    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.