
    Hardware-Assisted Secure Computation

    The theory community has worked on Secure Multiparty Computation (SMC) for more than two decades and has produced many protocols for many settings. One common thread in these works is that the protocols cannot use a Trusted Third Party (TTP), even though this is conceptually the simplest and most general solution. Thus, current protocols involve only the direct players---we call such protocols self-reliant. They often use blinded boolean circuits, which carry several sources of overhead, some due to the circuit representation and some due to the blinding. However, secure coprocessors like the IBM 4758 have actual security properties similar to ideal TTPs. They also have little RAM and a slow CPU. We call such devices tiny TTPs. The availability of real tiny TTPs opens the door to a different approach to SMC problems. One major challenge with this approach is how to execute large programs on large inputs using the small protected memory of a tiny TTP, while preserving the trust properties that an ideal TTP provides. In this thesis we investigate the use of real TTPs to help solve SMC problems. We start with the use of such TTPs to solve the Private Information Retrieval (PIR) problem, one important instance of SMC. Our implementation utilizes a 4758. The rest of the thesis targets general SMC. Our SMC system, Faerieplay, moves some functionality into a tiny TTP and thus avoids the blinded-circuit overhead. Faerieplay consists of a compiler from high-level code to an arithmetic circuit with special gates for efficient indirect array access, and a virtual machine that executes this circuit on a tiny TTP while maintaining the typical SMC trust properties. We report on Faerieplay's security properties, the specification of its components, and our implementation and experiments. These include comparisons with the Fairplay circuit-based two-party system, and an implementation of Dijkstra's graph shortest-path algorithm. We also provide an implementation of an oblivious RAM which supports similar tiny-TTP-based SMC functionality using a standard RAM program. Performance comparisons show Faerieplay's circuit approach to be considerably faster, at the expense of a more constrained programming environment when targeting a circuit.
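
    The special gates for indirect array access address a basic constraint of circuit-based SMC: a circuit's wiring is fixed, so a data-dependent index cannot simply address memory. A minimal sketch of the naive circuit-friendly workaround follows (this is the baseline such gates aim to beat, not Faerieplay's optimized gate, whose design the abstract does not detail): every slot is touched on every access, so the access pattern reveals nothing about the index.

```python
def oblivious_read(array, secret_index):
    """Read array[secret_index] by touching every slot.

    In a circuit, the equality test and multiply below become gates
    evaluated for *every* i, so the fixed access pattern reveals
    nothing about the index -- at O(n) cost per access, which is the
    overhead that dedicated indirect-array-access gates try to reduce.
    """
    result = 0
    for i, value in enumerate(array):
        select = int(i == secret_index)  # selector bit: 1 only at the index
        result += select * value         # only the chosen slot contributes
    return result


print(oblivious_read([10, 20, 30, 40], 2))  # prints 30
```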

    Usage Policies for Decentralised Information Processing

    Owners impose usage restrictions on their information, restrictions that may be based, for example, on privacy law, copyright law, or social conventions. Often, information is processed in complex constellations without central control. In this work, we introduce technologies to formally express usage restrictions in a machine-interpretable way as so-called policies, which enable the creation of decentralised systems that provide, consume and process distributed information in compliance with its usage restrictions.
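
    To make "machine-interpretable usage restrictions" concrete, here is a hypothetical sketch (the names and structure are illustrative, not the policy language developed in the work): a consumer checks a requested processing step against the policy attached to a piece of information before acting.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class UsagePolicy:
    """A hypothetical machine-interpretable usage restriction."""
    allowed_purposes: frozenset   # e.g. {"research", "statistics"}
    forbidden_actions: frozenset  # e.g. {"redistribute"}
    max_retention_days: int


def is_compliant(policy, purpose, action, retention_days):
    """Decide whether a processing step respects the owner's policy."""
    return (purpose in policy.allowed_purposes
            and action not in policy.forbidden_actions
            and retention_days <= policy.max_retention_days)


policy = UsagePolicy(frozenset({"research"}), frozenset({"redistribute"}), 30)
print(is_compliant(policy, "research", "aggregate", 7))   # True
print(is_compliant(policy, "marketing", "aggregate", 7))  # False
```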

    Seventh Biennial Report: June 2003 - March 2005


    Demographic change: towards a framework to manage IT personnel in times of scarcity of talent

    Due to unfavourable demographic developments in most European states, companies face a trade-off between a needed increase in productivity on the one hand and fewer highly skilled employees on the other. One of the first sectors to be affected by this development is the Information Technology (IT) industry. Information technology is crucial for every company in every industry and for people themselves. In a permanently changing business environment, companies and especially their IT departments must adapt to changes in the market and be more agile and customer-oriented than ever before. To succeed in the IT sector, the productivity of employees is the key component. Therefore, allocating and retaining these scarce resources in the best possible way is all the more important. The challenge for companies is to improve the enterprise not only at the organizational and process level, but to develop new strategies and approaches in human resource management. Only a symbiosis of the disciplines of information technology, economics, psychology and management will enable companies to promote the loyalty of relevant and indispensable employees. For a well-trained professional, frequent change of employer is associated with normality until the employee finds the most suitable environment for fulfilling their needs and expectations. These expectations are no longer based solely on financial incentives; consequently, companies need to anticipate these expectations and align their strategies to them. Although the topic is quite popular in the scientific literature, no study has been devoted to identifying these factors in organizational contexts. This thesis aims to bridge this gap. The first step towards this goal is creating transparency over all parts of an organisation that are relevant to the topic. The author created a method that connects these relevant parts in one holistic framework. The framework consists of five layers: baseline wages, education and employee pool, psychological health, physiological health, and work-life balance. The framework also follows a hierarchical approach: every layer has distinct factors and metrics to define and measure the status of the company and offers opportunities to derive measures to improve the situation. In total, the framework consists of 22 factors and 44 metrics. Besides the framework, the author developed an implementation model for the proposed method.
To refine the developed framework and implementation model, qualitative and quantitative tests were conducted in the IT department of a financial services company in Germany. Several research questions regarding demographic change, psychological stress and factors for employee performance were analysed and answered. The results show that stress is influenced by several different stressors, most of which need to be considered by companies when they allocate work or design workspaces. On the other side, several factors promote employee productivity. Some of them, such as work-life balance, company culture and salary, are more important and should be a relevant part of every human resource management (HRM) strategy. An HRM strategy should involve proper measures for both the recruiting and the development of employees, because they complement each other and should be considered with the same importance. The results also show that there is no evidence suggesting an age- or gender-related difference in the importance or impact of the productivity factors or psychological stressors.
Programa Oficial de Doctorado en Ciencia y Tecnología Informática. Committee: Antonio de Amescua Seco (chair), Edmundo Tovar Caro (secretary), Cristina Casado Lumbreras (member).

    Wide spectrum attribution: Using deception for attribution intelligence in cyber attacks

    Modern cyber attacks have evolved considerably. The skill level required to conduct a cyber attack is low; computing power is cheap; targets are diverse and plentiful. Point-and-click crimeware kits are widely circulated in the underground economy, while source code for sophisticated malware such as Stuxnet is available for all to download and repurpose. Despite decades of research into defensive techniques, such as firewalls, intrusion detection systems, anti-virus, code auditing, etc., the quantity of successful cyber attacks continues to increase, as does the number of vulnerabilities identified. Measures to identify perpetrators, known as attribution, have existed for as long as there have been cyber attacks. The most actively researched technical attribution techniques involve the marking and logging of network packets. These techniques are performed by network devices along the packet journey, which most often requires modification of existing router hardware and/or software, or the inclusion of additional devices. These modifications require wide-scale infrastructure changes that are not only complex and costly, but invoke legal, ethical and governance issues. The usefulness of these techniques is also often questioned, as attack actors use multiple stepping stones, often innocent systems that have been compromised, to mask the true source. As such, this thesis identifies that no publicly known previous work has been deployed on a wide-scale basis in the Internet infrastructure. This research investigates the use of an often overlooked tool for attribution: cyber deception. The main contribution of this work is a significant advancement in the field of deception and honeypots as technical attribution techniques. Specifically, the design and implementation of two novel honeypot approaches: i) Deception Inside Credential Engine (DICE), which uses policy and honeytokens to identify adversaries returning from different origins, and ii) Adaptive Honeynet Framework (AHFW), an introspection-based adaptive honeynet framework that uses actor-dependent triggers to modify the honeynet environment in order to engage the adversary, increasing the quantity and diversity of interactions. The two approaches are based on a systematic review of the technical attribution literature that was used to derive a set of requirements for honeypots as technical attribution techniques. Both approaches lead the way for further research in this field.
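
    To make the honeytoken idea concrete, a minimal sketch follows (hypothetical names, not the actual DICE implementation): issue a unique decoy credential to each attacking origin, and recognize the adversary when any of those credentials is later replayed, even from a different source address.

```python
import secrets


class HoneytokenStore:
    """Issue one unique decoy credential per attacking origin and
    recognize returning adversaries when a credential is replayed."""

    def __init__(self):
        self._token_to_origin = {}

    def issue(self, origin_ip):
        # Each origin receives its own unforgeable decoy credential.
        token = secrets.token_hex(16)
        self._token_to_origin[token] = origin_ip
        return token

    def attribute(self, presented_token):
        # A replayed token links the new session to the origin the
        # token was originally issued to, regardless of source address.
        return self._token_to_origin.get(presented_token)


store = HoneytokenStore()
decoy = store.issue("203.0.113.7")
print(store.attribute(decoy))  # '203.0.113.7', even if replayed elsewhere
```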

    Enhancing Privacy and Fairness in Search Systems

    Following a period of expedited progress in the capabilities of digital systems, society is beginning to realize that systems designed to assist people in various tasks can also harm individuals and society. Mediating access to information and explicitly or implicitly ranking people in increasingly many applications, search systems have a substantial potential to contribute to such unwanted outcomes. Since they collect vast amounts of data about both searchers and search subjects, they have the potential to violate the privacy of both of these groups of users. Moreover, in applications where rankings influence people's economic livelihood outside of the platform, such as sharing-economy or hiring-support websites, search engines have immense economic power over their users in that they control user exposure in ranked results. This thesis develops new models and methods broadly covering different aspects of privacy and fairness in search systems for both searchers and search subjects. Specifically, it makes the following contributions: (1) We propose a model for computing individually fair rankings where search subjects get exposure proportional to their relevance. The exposure is amortized over time using constrained optimization to overcome searcher attention biases while preserving ranking utility. (2) We propose a model for computing sensitive search exposure where each subject gets to know the sensitive queries that lead to her profile in the top-k search results. The problem of finding exposing queries is technically modeled as reverse nearest-neighbor search, followed by a weakly-supervised learning-to-rank model ordering the queries by privacy-sensitivity. (3) We propose a model for quantifying privacy risks from textual data in online communities. The method builds on a topic model where each topic is annotated by a crowdsourced sensitivity score, and privacy risks are associated with a user's relevance to sensitive topics. We propose relevance measures capturing different dimensions of user interest in a topic and show how they correlate with human risk perceptions. (4) We propose a model for privacy-preserving personalized search where search queries of different users are split and merged into synthetic profiles. The model mediates the privacy-utility trade-off by keeping semantically coherent fragments of search histories within individual profiles, while trying to minimize the similarity of any of the synthetic profiles to the original user profiles. The models are evaluated using information retrieval techniques and user studies over a variety of datasets, ranging from query logs, through social media and community question answering postings, to item listings from sharing-economy platforms.
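
    To illustrate the amortized-exposure idea in contribution (1), here is a simplified greedy sketch (not the constrained-optimization model the thesis proposes): track each subject's accumulated exposure across ranking rounds and, in each round, rank subjects by how far their exposure share lags behind their relevance share.

```python
def amortized_fair_rankings(relevance, position_weight, rounds):
    """Greedy sketch of exposure amortization: over several ranking
    rounds, drive each subject's cumulative exposure toward being
    proportional to relevance by ranking the most under-exposed first.

    relevance: dict subject -> relevance score (assumed fixed here)
    position_weight: exposure attracted by each rank position
    """
    total_rel = sum(relevance.values())
    exposure = {s: 0.0 for s in relevance}
    total_exposure = 0.0
    history = []
    for _ in range(rounds):
        def deficit(s):
            # Deserved share of exposure so far minus received share.
            deserved = relevance[s] / total_rel
            received = exposure[s] / total_exposure if total_exposure else 0.0
            return deserved - received
        ranking = sorted(relevance, key=deficit, reverse=True)
        for pos, s in enumerate(ranking):
            exposure[s] += position_weight[pos]
        total_exposure += sum(position_weight)
        history.append(ranking)
    return history


weights = [0.5, 0.3, 0.2]  # top positions attract most searcher attention
print(amortized_fair_rankings({"a": 3, "b": 2, "c": 1}, weights, rounds=4))
```

    Note how subjects rotate through the top position across rounds: a single ranking cannot give everyone exposure proportional to relevance, but the sequence of rankings can approximate it on average.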

    Fundamental Approaches to Software Engineering

    This open access book constitutes the proceedings of the 24th International Conference on Fundamental Approaches to Software Engineering, FASE 2021, which took place during March 27–April 1, 2021, and was held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg but changed to an online format due to the COVID-19 pandemic. The 16 full papers presented in this volume were carefully reviewed and selected from 52 submissions. The book also contains 4 Test-Comp contributions.

    Design Space Exploration and Resource Management of Multi/Many-Core Systems

    The increasing demand for processing ever more applications and related data on computing platforms has led to reliance on multi-/many-core chips, as they facilitate parallel processing. At the same time, these platforms are expected to be energy-efficient and reliable, and to perform computations securely in the interest of the whole community. This book provides perspectives on the aforementioned aspects from leading researchers in terms of state-of-the-art contributions and upcoming trends.

    Computer Aided Verification

    This open access two-volume set LNCS 10980 and 10981 constitutes the refereed proceedings of the 30th International Conference on Computer Aided Verification, CAV 2018, held in Oxford, UK, in July 2018. The 52 full and 13 tool papers presented together with 3 invited papers and 2 tutorials were carefully reviewed and selected from 215 submissions. The papers cover a wide range of topics and techniques, from algorithmic and logical foundations of verification to practical applications in distributed, networked, cyber-physical, and autonomous systems. They are organized in topical sections on model checking, program analysis using polyhedra, synthesis, learning, runtime verification, hybrid and timed systems, tools, probabilistic systems, static analysis, theory and security, SAT, SMT and decision procedures, concurrency, CPS, hardware, and industrial applications.