16 research outputs found

    Identity-as-a-Service: An Adaptive Security Infrastructure and Privacy-Preserving User Identity for the Cloud Environment

    In recent years, enterprise applications have begun to migrate from local hosting to cloud providers, and they may establish business-to-business relationships with each other manually. Adapting existing applications requires substantial implementation changes in individual architectural components. At the same time, users may store their Personally Identifiable Information (PII) in the cloud environment so that cloud services can access and use it on demand. Even if cloud services specify their privacy policies, we cannot guarantee that they follow those policies and will not (accidentally) transfer PII to another party. In this paper, we present Identity-as-a-Service (IDaaS) as a trusted Identity and Access Management service with two requirements. First, IDaaS adapts trust between cloud services on demand: we move the trust relationship and identity propagation out of the application implementation and model them as a security topology, and when the business comes up with a new e-commerce scenario, IDaaS uses the security topology to adapt a platform-specific security infrastructure for the given business scenario at runtime. Second, we protect the confidentiality of PII in federated security domains: we propose Purpose-based Encryption to protect PII from disclosure to intermediary entities in a business transaction and to untrusted hosts. Our solution is compliant with the General Data Protection Regulation and involves minimal user interaction to prevent identity theft via the human link. The implementation can easily be adapted to existing Identity Management systems, and its performance is fast.
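
    The abstract does not detail the construction; as a hedged sketch of the general idea behind purpose-bound protection of PII, the fragment below binds a declared purpose to the ciphertext as AES-GCM associated data, so decryption under a different purpose fails. The class and method names are illustrative only, not the paper's API.

    # Minimal sketch: binding a usage purpose to encrypted PII via AES-GCM
    # associated data. Illustrative only; not the paper's actual construction.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    class PurposeBoundPII:
        def __init__(self, key: bytes | None = None):
            self.key = key or AESGCM.generate_key(bit_length=256)

        def encrypt(self, pii: bytes, purpose: str) -> tuple[bytes, bytes]:
            """Encrypt PII so it can only be decrypted under the stated purpose."""
            nonce = os.urandom(12)
            return nonce, AESGCM(self.key).encrypt(nonce, pii, purpose.encode())

        def decrypt(self, nonce: bytes, ct: bytes, purpose: str) -> bytes:
            """Raises InvalidTag if the claimed purpose differs from the original."""
            return AESGCM(self.key).decrypt(nonce, ct, purpose.encode())

    if __name__ == "__main__":
        store = PurposeBoundPII()
        nonce, ct = store.encrypt(b"alice@example.com", purpose="billing")
        print(store.decrypt(nonce, ct, purpose="billing"))   # succeeds
        # store.decrypt(nonce, ct, purpose="marketing")      # raises InvalidTag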

    Twenty years of rewriting logic

    Rewriting logic is a simple computational logic that can naturally express both concurrent computation and logical deduction with great generality. This paper provides a gentle, intuitive introduction to its main ideas, as well as a survey of the work that many researchers have carried out over the last twenty years in advancing: (i) its foundations; (ii) its uses as a semantic framework and as a logical framework; (iii) its language implementations and formal tools; and (iv) its many applications to automated deduction, software and hardware specification and verification, security, real-time and cyber-physical systems, probabilistic systems, bioinformatics and chemical systems.
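
    As a toy illustration of the basic rewriting idea (far simpler than the rewrite theories and concurrent rewriting surveyed in the paper, and not tied to Maude or any other implementation), the sketch below normalizes Peano-style addition terms with two rewrite rules:

    # Toy term-rewriting sketch: terms are nested tuples, rules map a term to a
    # reduct or return None when they do not apply.
    def rule_add_zero(t):
        # add(0, y) => y
        if isinstance(t, tuple) and t[0] == "add" and t[1] == 0:
            return t[2]

    def rule_add_succ(t):
        # add(s(x), y) => s(add(x, y))
        if isinstance(t, tuple) and t[0] == "add" and isinstance(t[1], tuple) and t[1][0] == "s":
            return ("s", ("add", t[1][1], t[2]))

    RULES = [rule_add_zero, rule_add_succ]

    def rewrite(term):
        """Reduce arguments first, then apply rules at the root until none fires."""
        if isinstance(term, tuple):
            term = (term[0],) + tuple(rewrite(a) for a in term[1:])
        while True:
            for rule in RULES:
                reduct = rule(term)
                if reduct is not None:
                    term = rewrite(reduct)
                    break
            else:
                return term

    if __name__ == "__main__":
        two, one = ("s", ("s", 0)), ("s", 0)
        print(rewrite(("add", two, one)))  # ('s', ('s', ('s', 0))), i.e. 2 + 1 = 3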

    Advances in Manipulation and Recognition of Digital Ink

    Handwriting is one of the most natural ways for a human to record knowledge. Recently, this type of human-computer interaction has received increasing attention due to the rapid evolution of touch-based hardware and software. While hardware support for digital ink has reached maturity, algorithms for recognizing handwriting in certain domains, including mathematics, still lack robustness. At the same time, users may own several pen-based devices, and sharing training data in an adaptive recognition setting can be challenging. In addition, the resolution of pen-based devices keeps improving, making the ink cumbersome to process and store. This thesis develops several advances for efficient processing, storage and recognition of handwriting that are applicable to classification methods based on functional approximation. In particular, we propose improvements to the classification of isolated characters, groups of rotated characters, and symbols of substantially different size. We then develop an algorithm for adaptive classification of a user's handwritten mathematical characters. The adaptive algorithm can be especially useful in the cloud-based recognition framework described further in the thesis. We investigate whether the training data available in the cloud can help a new writer during the training phase by extracting styles of individuals with similar handwriting and recommending styles to the writer. We also perform a factorial analysis of the algorithm for recognition of n-grams of rotated characters. Finally, we show a fast method for compressing linear pieces of handwritten strokes and compare it with an enhanced version of the algorithm based on functional approximation of strokes. Experimental results demonstrate the validity of the theoretical contributions, which form a solid foundation for the next generation of handwriting recognition systems.
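
    The classification methods referred to above represent ink strokes by functional approximation; below is a minimal sketch of that representation, assuming the common choice of a truncated Chebyshev series fitted to the x(t) and y(t) coordinates of a stroke (the degree and parametrization are illustrative, not the thesis's exact settings):

    # Minimal sketch: a handwritten stroke as truncated Chebyshev series fitted
    # to its x(t) and y(t) coordinates, giving a fixed-size, resolution-
    # independent feature vector. Degree and parametrization are illustrative.
    import numpy as np
    from numpy.polynomial.chebyshev import Chebyshev

    def stroke_coefficients(points: np.ndarray, degree: int = 8) -> np.ndarray:
        """points: (N, 2) array of sampled (x, y); returns 2 * (degree + 1) features."""
        t = np.linspace(-1.0, 1.0, len(points))        # curve parameter over [-1, 1]
        cx = Chebyshev.fit(t, points[:, 0], degree).coef
        cy = Chebyshev.fit(t, points[:, 1], degree).coef
        return np.concatenate([cx, cy])

    if __name__ == "__main__":
        s = np.linspace(0.0, np.pi, 200)                # a synthetic arc-shaped stroke
        stroke = np.stack([np.cos(s), np.sin(s)], axis=1)
        print(stroke_coefficients(stroke).shape)        # (18,)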

    Theoretical issues in Numerical Relativity simulations

    In this thesis we address several analytical and numerical problems related to the general relativistic study of black hole space-times and boson stars. We have developed a new centered finite volume method based on the flux splitting approach. The techniques for dealing with the singularity, steep gradients and apparent horizon location are studied in the context of a single Schwarzschild black hole, in both spherically symmetric and full 3D simulations. We present an extended study of gauge instabilities related to a class of singularity-avoiding slicing conditions and show that, contrary to previous claims, these instabilities are not generic for evolved gauge conditions. We developed an alternative to the current spatial coordinate conditions, based on a generalized Almost Killing Equation. Finally, we performed a general relativistic study of the long-term stability of Mixed-State Boson Star configurations and showed that they are suitable candidates for dark matter models.
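
    The centered finite volume method mentioned above is based on flux splitting; as a much-simplified sketch of that family of schemes (not the thesis's relativistic formulation), here is a local Lax-Friedrichs flux split applied to 1D linear advection:

    # Much-simplified sketch of a finite-volume update with flux splitting for
    # u_t + a u_x = 0 on a periodic grid. The thesis applies these ideas to the
    # full general-relativistic evolution equations.
    import numpy as np

    def flux_split_step(u: np.ndarray, a: float, dx: float, dt: float) -> np.ndarray:
        """One forward-Euler step using a local Lax-Friedrichs flux split."""
        f = a * u                               # physical flux f(u) = a u
        alpha = abs(a)                          # local maximum wave speed
        f_plus = 0.5 * (f + alpha * u)          # right-going part
        f_minus = 0.5 * (f - alpha * u)         # left-going part
        F = f_plus + np.roll(f_minus, -1)       # interface flux F_{i+1/2}
        return u - dt / dx * (F - np.roll(F, 1))

    if __name__ == "__main__":
        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        u = np.exp(-100.0 * (x - 0.5) ** 2)     # Gaussian pulse
        dx, a = x[1] - x[0], 1.0
        dt = 0.4 * dx                           # CFL number 0.4
        for _ in range(250):
            u = flux_split_step(u, a, dx, dt)
        print(f"pulse maximum after advection: {u.max():.3f}")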

    Segmentation-free approaches for automatic recognition of handwritten digit strings using deep learning

    Advisor: Prof. Dr. Luiz Eduardo Soares de Oliveira. Co-advisor: Prof. Dr. Robert Sabourin. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Tecnologia. Defense: Curitiba, 12/03/2019. Includes references: p. 83-90.
    Abstract: Over the last decades, the recognition of handwritten digit strings has been approached in a similar way by several authors with respect to the connected-digits issue: the need to segment these components is a consensus. The approaches therefore attempt to determine segmentation points by applying heuristics to features extracted from the object, background, contour, and so on. However, the production of fragmented digits, causing over-segmentation of the string, is a common problem among these approaches. The methodologies are thus categorized by the way they handle the components resulting from this process: (a) those that produce only a single possible segmentation, or (b) those that define a set of segmentation hypotheses together with a fusion method to determine the best hypothesis. Although the second category achieves higher recognition rates, its computational cost is unfavorable because of the recurrent classifier calls needed to classify the hypotheses produced. Moreover, the variability of connection types and the lack of context, such as the number of digits present in the string, expose the limitations of approaches based on heuristic processes. To avoid these problems, we show that it is possible to surpass traditional methods by implementing deep-learning models that classify connected digits directly, reducing the segmentation step to a connected-component detection process. In addition, taking advantage of advances in object detection, we propose a new approach to the problem in which digits are understood as objects in an image, so that a sequence of digits is a sequence of objects. To validate our hypotheses, experiments on well-known datasets compared our proposals with the state of the art in terms of recognition, correct segmentation and computational cost. The results reached recognition rates of about 97% on a dataset of connected digit pairs and 95% on string samples from NIST SD19, surpassing state-of-the-art levels. Besides the high recognition rates, there was also a significant reduction in classifier calls (computational cost), especially in complex cases, again surpassing the state of the art and demonstrating the potential of the proposed approaches.
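
    As a hedged sketch of the segmentation-reduction idea described above, the fragment below replaces heuristic segmentation with connected-component detection and hands each cropped component to a classifier; classify is a placeholder for a trained network, not the thesis's model:

    # Sketch: reduce segmentation to connected-component detection, then pass
    # each component to a digit classifier. `classify` is a placeholder.
    import numpy as np
    from scipy import ndimage

    def components(binary_image: np.ndarray) -> list[np.ndarray]:
        """Return one cropped sub-image per connected component, left to right."""
        labels, n = ndimage.label(binary_image)
        boxes = ndimage.find_objects(labels)
        order = sorted(range(n), key=lambda i: boxes[i][1].start)  # sort by x
        return [binary_image[boxes[i]] for i in order]

    def classify(component: np.ndarray) -> str:
        """Placeholder for a CNN trained on isolated and touching digits."""
        return "?"

    def recognize_string(binary_image: np.ndarray) -> str:
        return "".join(classify(c) for c in components(binary_image))

    if __name__ == "__main__":
        img = np.zeros((8, 16), dtype=np.uint8)
        img[2:6, 2:5] = 1     # first synthetic blob
        img[2:6, 9:13] = 1    # second synthetic blob
        print(len(components(img)), recognize_string(img))  # 2 ??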

    WSN based sensing model for smart crowd movement with identification: a conceptual model

    With the advancement of IT and the increase in the world population, Crowd Management (CM) has become a subject of intense study among researchers. Technology provides fast, easily available means of transport and up-to-date information access to people, which draws crowds to public places. This poses a big challenge for crowd safety and security at places such as airports, railway stations and checkpoints; an example is the crowd of pilgrims during Hajj and Umrah crossing the borders into Makkah, Kingdom of Saudi Arabia. To minimize risks to crowd safety and security, identification and verification of people is necessary, which adds unwanted processing time. Managing a crowd during a specific time period (Hajj and Umrah) while also performing identification and verification is therefore a challenge. At present, advanced technologies such as the Internet of Things (IoT) are being used to solve the crowd management problem with minimal processing time. In this paper, we present a Wireless Sensor Network (WSN) based conceptual model for smart crowd movement with minimal processing time for people identification. The model handles the crowd by forming groups and provides proactive support to manage them in an organized manner. As a result, the crowd can move safely from one place to another using group identification, which minimizes processing time and moves the crowd in a smart way.

    Modelling, Dimensioning and Optimization of 5G Communication Networks, Resources and Services

    This reprint collects state-of-the-art research contributions that address challenges in the design, dimensioning and optimization of emerging 5G networks. Designing, dimensioning and optimizing communication network resources and services have long been an inseparable part of telecom network development. Such networks must convey a large volume of traffic, serving traffic streams with highly differentiated requirements in terms of bit rate, service time, and required quality-of-service and quality-of-experience parameters. This communication infrastructure presents many important challenges, such as the necessary multi-layer cooperation, new protocols, performance evaluation of different network parts, lower-layer network design, network management and security issues, and new technologies in general, which are discussed in this book.

    Secure Session Framework: An Identity-based Cryptographic Key Agreement and Signature Protocol

    This dissertation deals with identity-based encryption, in which the name or identity of a target object is used to encrypt data. This property makes the method a suitable tool for modern electronic communication, since the identities and endpoint addresses used there must be globally unique. The identity-based key agreement protocol developed in this work offers advantages over existing schemes and opens up new possibilities. One of its main features is the complete independence of the key generators. This independence allows different security domains to set up their own systems; they are no longer forced to coordinate with one another or to exchange secrets. Owing to the properties of the protocol, the systems nevertheless remain mutually compatible, which means that users of one security domain can communicate with users of another security domain in encrypted form without additional effort. This independence has also been carried over to a signature protocol, which enables users of different security domains to sign an object, where the signing process itself can likewise be independent. Besides the protocol, the work also analyzes existing systems. Attacks on established protocols and conjectures were found that show whether, or in which situations, these should not be used. On the one hand, a completely new approach was developed, based on the (un)definedness of certain objects in discrete spaces; on the other hand, the well-known analysis method of lattice reduction was applied and successfully transferred to new areas. Finally, the work presents application scenarios for the protocol in which its advantages are particularly relevant. The first scenario concerns telephony, where the telephone number of a target person is used as the key; both GSM telephony and VoIP telephony are examined, with implementations carried out on a current mobile phone and existing VoIP software extended. The second application example is IP networks. Using the IP address of a machine as the key is also a good example, but more difficulties arise here than in telephony, for instance dynamic IP addresses or Network Address Translation, where the IP address is replaced. These and further problems were identified and solutions developed for each.

    A unified approach to the development and usage of mobile agents

    Mobile agents are an interesting approach to the development of distributed systems. By moving freely across the network, they allow for the distribution of computation as well as the gathering and filtering of information in an autonomous way. Over the last decade, the agent research community has achieved tremendous results. However, the community has not been able to provide easy-to-use toolkits that make this paradigm available to a broader audience. By embracing simplicity in the creation of a formal model and of a reference implementation that creates and executes instances of that model, our aim is to enable a wide audience, even non-experts, to create, adapt and use mobile agents. The proposed model allows agents to be created by combining atomic, self-contained building blocks, and we provide an approachable, easy-to-use graphical editor for creating model instances. In two evaluations, we could reinforce our belief that, with the achieved results, we have reached our aims.
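
    A minimal sketch of the building-block idea, under the assumption that an agent's behavior is a sequence of small, self-contained callables carried along an itinerary of hosts; the class names and the way migration is simulated are hypothetical and do not reproduce the thesis's formal model or toolkit API:

    # Hypothetical sketch: a mobile agent composed from atomic building blocks.
    # Migration is only simulated by iterating over an itinerary of host names.
    from dataclasses import dataclass, field
    from typing import Callable

    Block = Callable[[dict], None]   # an atomic, self-contained building block

    @dataclass
    class Agent:
        itinerary: list[str]                        # hosts to visit, in order
        blocks: list[Block]                         # behavior = sequence of blocks
        state: dict = field(default_factory=dict)   # data carried between hosts

        def run(self) -> dict:
            for host in self.itinerary:             # "migration" step
                self.state["host"] = host
                for block in self.blocks:
                    block(self.state)
            return self.state

    def collect_load(state: dict) -> None:
        # Pretend to sample a metric on the current host.
        state.setdefault("samples", []).append((state["host"], 0.42))

    def filter_high(state: dict) -> None:
        state["high"] = [s for s in state.get("samples", []) if s[1] > 0.4]

    if __name__ == "__main__":
        agent = Agent(itinerary=["node-a", "node-b"], blocks=[collect_load, filter_high])
        print(agent.run()["high"])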

    Toward an Understanding of Software Code Cloning as a Development Practice

    Code cloning is the practice of duplicating existing source code for use elsewhere within a software system. Within the research community, conventional wisdom has asserted that code cloning is generally a bad practice and that code clones should be removed or refactored where possible. While there is significant anecdotal evidence that code cloning can lead to a variety of maintenance headaches (such as code bloat, duplication of bugs, and inconsistent bug fixing), there has been little empirical study of the frequency, severity, and costs of code cloning with respect to software maintenance. This dissertation seeks to improve our understanding of code cloning as a common development practice through the study of several widely adopted, medium-sized open source software systems. We have explored the motivations behind the use of code cloning as a development practice by addressing several fundamental questions: For what reasons do developers choose to clone code? Are there distinct, identifiable patterns of cloning? What are the possible short- and long-term risks of cloning? What management strategies are appropriate for the maintenance and evolution of clones? When is the "cure" (refactoring) likely to cause more harm than the "disease" (cloning)? There are three major research contributions of this dissertation. First, we propose a set of requirements for an effective clone analysis tool, based on our experiences in clone analysis of large software systems. These requirements are demonstrated in an example implementation that we used to perform the case studies prior to and included in this thesis. Second, we present an annotated catalogue of common code cloning patterns that we observed in our studies. Third, we present an empirical study of the relative frequencies and likely harmfulness of instances of these cloning patterns as observed in two medium-sized open source software systems, the Apache web server and the Gnumeric spreadsheet application. In summary, it appears that code cloning is often used as a principled engineering technique for a variety of reasons, and as many as 71% of the clones in our study could be considered to have a positive impact on the maintainability of the software system. These results suggest that the conventional wisdom that code clones are generally harmful to the quality of a software system is wrong.