
    Wiktionary Matcher

    In this paper, we introduce Wiktionary Matcher, an ontology matching tool that exploits Wiktionary as an external background knowledge source. Wiktionary is a large lexical knowledge resource that is collaboratively built online. Multiple current language versions of Wiktionary are merged and used for monolingual ontology matching by exploiting synonymy relations, and for multilingual matching by exploiting the translations given in the resource. We show that Wiktionary can be used as an external background knowledge source for the task of ontology matching with reasonable matching and runtime performance.
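    As an illustration of the matching idea described above, the following minimal sketch matches two labels monolingually when their synonym sets overlap, and multilingually when one label appears among the other's translations. The data structures and function names are assumptions for illustration, not Wiktionary Matcher's actual API.

```python
# Illustrative sketch only; synonym and translation dictionaries stand in for
# the merged Wiktionary language versions mentioned in the abstract.

def monolingual_match(label_a: str, label_b: str, synonyms: dict) -> bool:
    """synonyms maps a term to its set of Wiktionary synonyms."""
    set_a = synonyms.get(label_a, set()) | {label_a}
    set_b = synonyms.get(label_b, set()) | {label_b}
    return bool(set_a & set_b)  # match if the synonym sets share a term

def multilingual_match(label_a: str, label_b: str, translations: dict) -> bool:
    """translations maps a term to its set of Wiktionary translations."""
    return (label_b in translations.get(label_a, set())
            or label_a in translations.get(label_b, set()))

# Example: "car" and "automobile" match via a shared synonym entry.
assert monolingual_match("car", "automobile", {"car": {"automobile", "auto"}})
```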

    Mixed Criticality on Multi-cores Accounting for Resource Stress and Resource Sensitivity

    The most significant trend in real-time systems design in recent years has been the adoption of multi-core processors and the accompanying integration of functionality with different criticality levels onto the same hardware platform. This paper integrates mixed criticality aspects and assurances within a multi-core system model. It bounds cross-core contention and interference by considering the impact on task execution times due to the stress on shared hardware resources caused by co-runners, and each task’s sensitivity to that resource stress. Schedulability analysis is derived for four mixed criticality scheduling schemes based on partitioned fixed priority preemptive scheduling. Each scheme provides robust timing guarantees for high criticality tasks, ensuring that their timing constraints cannot be jeopardized by the behavior or misbehavior of low criticality tasks.
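    The following sketch shows the general shape of such an analysis, under stated assumptions rather than the paper's exact formulation: each task's worst-case execution time (WCET) is inflated by its sensitivity to the resource stress its co-runners generate, and the result feeds classic response-time analysis for fixed-priority preemptive scheduling on one core. All parameter names are illustrative.

```python
import math

def inflated_wcet(wcet: float, sensitivity: float, corunner_stress: float) -> float:
    # Assumed model: execution time grows with the stress that co-runners
    # place on shared hardware resources, scaled by the task's sensitivity.
    return wcet * (1.0 + sensitivity * corunner_stress)

def response_time(wcet: float, deadline: float, higher_prio: list):
    """Classic response-time iteration for fixed-priority preemptive scheduling.
    higher_prio holds (wcet, period) pairs of higher-priority tasks on the
    same core. Returns the worst-case response time, or None if the deadline
    is missed."""
    r = wcet
    while r <= deadline:
        r_next = wcet + sum(math.ceil(r / t) * c for c, t in higher_prio)
        if r_next == r:
            return r  # fixed point reached: task is schedulable
        r = r_next
    return None

# Example: a task with stress-inflated WCET checked against its deadline.
c = inflated_wcet(wcet=2.0, sensitivity=0.5, corunner_stress=0.4)  # -> 2.4
print(response_time(c, deadline=10.0, higher_prio=[(1.0, 5.0)]))   # -> 3.4
```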

    Nonce-based Kerberos is a Secure Delegated AKE Protocol

    Kerberos is one of the most important cryptographic protocols, first because it is the basic authentication protocol in Microsoft's Active Directory and is shipped with every major operating system, and second because it served as a model for all Single Sign-On protocols (e.g. SAML, OpenID, MS Cardspace, OpenID Connect). Its security has been confirmed with several Dolev-Yao style proofs, and attacks on certain versions of the protocol have been described. However, despite its importance, its longevity, and the wealth of Dolev-Yao-style security proofs, no reduction-based security proof has been published until now. There are two reasons for this: (1) All widely accepted formal models either deal with two-party protocols or with group key agreement protocols (where all entities have the same role), but not with 3-party protocols where each party has a different role. (2) Kerberos uses timestamps and nonces, and formal security models for timestamps are not well understood up to now. As a step towards a full security proof of Kerberos, we target problem (1) here: we propose a variant of the Kerberos protocol in which nonces are used instead of timestamps. This requires one additional protocol message, but enables a proof in the standard Bellare-Rogaway (BR) model. The key setup and the roles of the different parties are identical to the original Kerberos protocol. For our proof, we only require that the authenticated encryption and the message authentication code (MAC) schemes are secure. Under these assumptions we show that the probability that a client or server process oracle accepts maliciously, and the advantage of an adversary trying to distinguish a real Kerberos session key from a random value, are both negligible. One main idea in the proof is to model the Kerberos server as a public oracle, so that we do not have to consider the security of the connection between client and Kerberos server. This idea is only applicable to the communication pattern adopted by Kerberos, and not to other 3-party patterns (e.g. EAP protocols).
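    The following is a minimal sketch, not the paper's protocol, of the freshness mechanism the abstract describes: a nonce, echoed under a MAC computed with a pre-shared key, rules out replays where Kerberos would otherwise rely on a timestamp. Message labels and key names are assumptions for illustration.

```python
import os, hmac, hashlib

def mac(key: bytes, msg: bytes) -> bytes:
    # Standard HMAC-SHA256 stands in for the MAC scheme assumed secure above.
    return hmac.new(key, msg, hashlib.sha256).digest()

shared_key = os.urandom(32)  # long-term key, as in the Kerberos key setup

# Challenger picks a fresh nonce and sends it in the clear.
nonce = os.urandom(16)

# Responder proves liveness by MACing the nonce with the shared key.
tag = mac(shared_key, b"response" + nonce)

# A valid tag over *this* nonce cannot be a replay of an old message,
# because the nonce has never been used before; no clock is needed.
assert hmac.compare_digest(tag, mac(shared_key, b"response" + nonce))
```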

    FIN-DM: A Data Mining Process Model for Financial Services

    Data mining is a set of rules, processes, and algorithms that allow companies to increase revenues, reduce costs, optimize products and customer relationships, and achieve other business goals by extracting actionable insights from the data they collect on a day-to-day basis. Data mining and analytics projects require a well-defined methodology and processes. Several standard process models for conducting data mining and analytics projects are available. Among them, the most notable and widely adopted standard model is CRISP-DM. It is industry-agnostic and is often adapted to meet sector-specific requirements. Industry-specific adaptations of CRISP-DM have been proposed across several domains, including healthcare, education, industrial and software engineering, and logistics. However, until now there has been no adaptation of CRISP-DM for the financial services industry, which has its own set of domain-specific requirements. This PhD thesis addresses this gap by designing, developing, and evaluating a sector-specific data mining process for financial services (FIN-DM). The thesis investigates how standard data mining processes are used across various industry sectors and in financial services. The examination identified a number of adaptation scenarios for traditional frameworks. It also suggested that these approaches do not pay sufficient attention to turning data mining models into software products integrated into organizations' IT architectures and business processes. In the financial services domain, the main adaptation scenarios discovered concerned technology-centric (scalability), business-centric (actionability), and human-centric (mitigating discriminatory effects) aspects of data mining. Next, a case study in an actual financial services organization revealed 18 perceived gaps in the CRISP-DM process. Using the data and results from these studies, the thesis outlines an adaptation of CRISP-DM for the financial sector, named the Financial Industry Process for Data Mining (FIN-DM). FIN-DM extends CRISP-DM to support privacy-compliant data mining, to tackle AI ethics risks, to fulfill risk management requirements, and to embed quality assurance as part of the data mining life cycle.
    https://www.ester.ee/record=b547227

    On the Dual Uses of Science and Ethics: Principles, Practices, and Prospects

    Ethics, humanity, technology

    Workload Interleaving with Performance Guarantees in Data Centers

    In the era of global, large-scale data centers residing in clouds, many applications and users share the same pool of resources for the purposes of reducing energy and operating costs, and of improving availability and reliability. Along with the above benefits, resource sharing also introduces performance challenges: when multiple workloads access the same resources concurrently, contention may occur and introduce delays in the performance of individual workloads. Providing performance isolation to individual workloads requires effective management methodologies. The challenge in deriving such methodologies lies in finding accurate, robust, compact metrics and models to drive algorithms that can meet different performance objectives while achieving efficient utilization of resources. This dissertation proposes a set of methodologies aimed at solving the challenging performance isolation problem of workload interleaving in data centers, focusing on both storage components and computing components. At the storage node level, we focus on methodologies for better interleaving user traffic with background workloads, such as tasks for improving reliability, availability, and power savings. More specifically, a scheduling policy for background workload based on the statistical characteristics of the system busy periods and a methodology that quantitatively estimates the performance impact of power savings are developed. At the storage cluster level, we consider methodologies for how to efficiently conduct work consolidation and schedule asynchronous updates without violating user performance targets. More specifically, we develop a framework that can estimate beforehand the benefits and overheads of each option in order to automate the process of reaching intelligent consolidation decisions while achieving faster eventual consistency. At the computing node level, we focus on improving workload interleaving at off-the-shelf servers, as they are the basic building blocks of large-scale data centers. We develop priority scheduling middleware that employs different policies to schedule background tasks based on the instantaneous resource requirements of the high-priority applications running on the server node. Finally, at the computing cluster level, we investigate popular computing frameworks for large-scale data-intensive distributed processing, such as MapReduce and its Hadoop implementation. We develop a new Hadoop scheduler called DyScale to exploit capabilities offered by heterogeneous cores in order to achieve a variety of performance objectives.
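    As a rough illustration of the priority-scheduling idea at the computing node level, the sketch below runs background tasks only while the high-priority workload leaves CPU headroom. It is a minimal sketch under assumed parameters, not the dissertation's middleware; the threshold and polling interval are arbitrary choices.

```python
import time
import psutil  # third-party: pip install psutil

CPU_HEADROOM = 50.0  # assumed threshold: pause background work above 50% CPU

def run_background(tasks: list) -> None:
    """Drain a queue of background callables without hurting the foreground."""
    while tasks:
        busy = psutil.cpu_percent(interval=0.5)  # instantaneous node-wide load
        if busy < CPU_HEADROOM:
            task = tasks.pop(0)
            task()              # run one unit of background work
        else:
            time.sleep(0.5)     # back off so the foreground keeps its targets
```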

    Trusted computing or trust in computing? Legislating for trust networks

    This thesis aims to address several issues emerging in the new digital world. It uses Trusted Computing as the paradigmatic example of regulation through code applied to the cyber security problem, where the freedom of the user to reconfigure her machine is restricted in exchange for greater, yet not perfect, security. Trusted Computing is a technology that, while it aims to protect the user, the integrity of her machine, and her privacy against third parties, discloses more of her information to trusted third parties, exposing her to security risks if one of those third parties is compromised. It also intends to create a decentralized, bottom-up solution to security, where security follows along the arcs of an emergent “network of trust”, and, if that proves viable, to achieve a form of code-based regulation. Through the analysis attempted in this thesis, we lay the groundwork for a refined assessment of the problems that the Trusted Computing Initiative (TCI) faces, problems rooted in the intentional, systematic, but sometimes misunderstood and miscommunicated difference (which, as we reveal, results directly in certain design choices for TC) between the conception of trust in informatics (“techno-trust”) and the common sociological concept of it. To reap the benefits of the TCI and create the dynamic “network of trust”, we need the sociological concept of trust, whose fundamental characteristics of transitivity and holism are absent from techno-trust. This gives rise to the next problems we visit: if TC shifts power from the customer to the TC provider, who takes on roles previously reserved for the nation state, how, in a democratic state, can users trust those who make the rules? The answer lies partly in constitutional and human rights law: we drill into those functions of TC that make the TC provider comparable to a state-like actor, and ask what minimal legal guarantees need to be in place for this shift of power to be accepted, trustingly. Secondly, traditional liberal contract law reduces complex social relations to binary exchange relations, which are not transitive and disrupt rather than create networks. Contract law, as we argue, plays a central role in the way the TC provider interacts with his customers, and this thesis contributes by speculating about a contract law that does not result in atomism but rather “brings in” potentially affected third parties and results in holistic networks. In the same vein, the thesis looks mainly at specific ways in which law can correct or redefine the implicit and democratically unvalidated shift of power from customers to TC providers, while enhancing the social environment, and its social trust, within which TC must operate.

    Resource Description and Selection for Similarity Search in Metric Spaces: Problems and Problem-Solving Approaches

    In times of an ever increasing amount of data and a growing diversity of data types in different application contexts, there is a strong need for large-scale and flexible indexing and search techniques. Metric access methods (MAMs) provide this flexibility because they only assume that the dissimilarity between two data objects is modeled by a distance metric. Furthermore, scalable solutions can be built with the help of distributed MAMs. Both IF4MI and RS4MI, which are presented in this thesis, represent metric access methods. IF4MI belongs to the group of centralized MAMs. It is based on an inverted file and thus offers a hybrid access method providing text retrieval capabilities in addition to content-based search in arbitrary metric spaces. In opposition to IF4MI, RS4MI is a distributed MAM based on resource description and selection techniques. Here, data objects are physically distributed. However, RS4MI is by no means restricted to a certain type of distributed information retrieval system. Various application fields for the resource description and selection techniques are possible, for example in the context of visual analytics. Due to the metric space assumption, possible application fields go far beyond the content-based image retrieval applications that provide the example scenario here.
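    To make the metric space assumption concrete, the sketch below shows the pruning trick that metric access methods in general build on: with distances to a pivot precomputed, the triangle inequality gives the lower bound |d(q,p) - d(o,p)| <= d(q,o), which discards objects without computing d(q,o). The names are illustrative, not IF4MI's or RS4MI's API.

```python
def range_search(objects, pivot_dist, d, query, pivot, radius):
    """pivot_dist maps each object to its precomputed distance to the pivot."""
    d_qp = d(query, pivot)  # one distance computation against the pivot
    results = []
    for obj in objects:
        if abs(d_qp - pivot_dist[obj]) > radius:
            continue  # safely pruned by the triangle inequality
        if d(query, obj) <= radius:
            results.append(obj)
    return results

# Example with the absolute-difference metric on numbers, pivot at 0.0:
objs = [1.0, 4.0, 9.0]
pd = {o: abs(o - 0.0) for o in objs}
print(range_search(objs, pd, lambda a, b: abs(a - b), 3.0, 0.0, 2.0))  # [1.0, 4.0]
```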

    Proceedings of the European Conference on Agricultural Engineering AgEng2021

    This proceedings book results from the AgEng2021 Agricultural Engineering Conference, held under the auspices of the European Society of Agricultural Engineers in an online format hosted by the University of Évora, Portugal, from 4 to 8 July 2021. The book contains the full papers of a selection of abstracts that formed the basis for the oral presentations and posters presented at the conference. Presentations were distributed across eleven thematic areas: Artificial Intelligence, data processing and management; Automation, robotics and sensor technology; Circular Economy; Education and Rural development; Energy and bioenergy; Integrated and sustainable Farming systems; New application technologies and mechanisation; Post-harvest technologies; Smart farming / Precision agriculture; Soil, land and water engineering; Sustainable production in Farm buildings.

    July 11, 2009 (Pages 3419-4086)
