
    Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions

    In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to being able to ensure that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers to take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. Several new design tools were developed and refined to support the design of MPDSMs under fracture conditions, including a mapping method for the FDM manufacturability constraints, three major literature reviews, the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials, some new experimental equipment, and the refinement of a fast and simple g-code generator based on commercially-available software. The refined design method and rules were experimentally validated using a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide was developed from the results of this project for practicing engineers who are not experts in advanced solid mechanics or process-tailored materials.
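The dissertation's own g-code generator builds on commercially-available software and is not reproduced here; the following is only a minimal sketch of the underlying idea, i.e., emitting a layer of parallel deposition traces as G1 moves. All geometry and process parameters (bead width, layer height, filament diameter, feed rate) are illustrative assumptions.

```python
import math

# Minimal sketch of a raster-style g-code generator for one FDM layer.
# All process parameters are illustrative assumptions, not the
# dissertation's values.

def raster_layer(x0, y0, width, height, bead_w=0.4, layer_h=0.2,
                 filament_d=1.75, feed=1800):
    """Return G1 moves tracing back-and-forth parallel beads."""
    # mm of filament consumed per mm of travel for the given bead section
    per_mm = (bead_w * layer_h) / (math.pi * (filament_d / 2) ** 2)
    e, y, left = 0.0, y0, True
    lines = [f"G0 X{x0:.2f} Y{y0:.2f} F{feed}"]  # travel move to the start
    while y <= y0 + height:
        x_end = x0 + width if left else x0
        e += width * per_mm                       # extrude along the trace
        lines.append(f"G1 X{x_end:.2f} Y{y:.2f} E{e:.4f} F{feed}")
        y += bead_w                               # step over one bead width
        if y <= y0 + height:
            e += bead_w * per_mm                  # short connecting bead
            lines.append(f"G1 X{x_end:.2f} Y{y:.2f} E{e:.4f} F{feed}")
        left = not left
    return lines

print("\n".join(raster_layer(10.0, 10.0, 40.0, 20.0)))
```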

    Technical Dimensions of Programming Systems

    Programming requires much more than just writing code in a programming language. It is usually done in the context of a stateful environment, by interacting with a system through a graphical user interface. Yet this wide space of possibilities lacks a common structure for navigation. Work on programming systems fails to form a coherent body of research, making it hard to improve on past work and advance the state of the art. In computer science, much has been said and done to allow comparison of programming languages, yet no similar theory exists for programming systems; we believe that programming systems deserve a theory too. We present a framework of technical dimensions which capture the underlying characteristics of programming systems and provide a means for conceptualizing and comparing them. We identify technical dimensions by examining past influential programming systems and reviewing their design principles, technical capabilities, and styles of user interaction. Technical dimensions capture characteristics that may be studied, compared, and advanced independently. This makes it possible to talk about programming systems in a way that can be shared and constructively debated, rather than relying solely on personal impressions. Our framework is derived using a qualitative analysis of past programming systems. We outline two concrete ways of using our framework. First, we show how it can be used to analyze a recently developed programming system. Then, we use it to identify an interesting unexplored point in the design space of programming systems. Much research effort focuses on building programming systems that are easier to use, accessible to non-experts, moldable, and/or powerful, but such efforts are disconnected. They are informal, guided by the personal vision of their authors, and thus are only evaluable and comparable on the basis of individual experience using them. By providing foundations for more systematic research, we can help programming systems researchers to stand, at last, on the shoulders of giants.
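As a purely illustrative sketch of how such dimensions might be put to work (the dimension names and values below are invented for the example, not the framework's actual catalogue), one could record each system's position along the dimensions and compare profiles programmatically:

```python
from dataclasses import dataclass, field

# Hypothetical encoding of programming systems as positions along a few
# technical dimensions; names and values are invented for illustration.

@dataclass
class SystemProfile:
    name: str
    dimensions: dict = field(default_factory=dict)

def differing_dimensions(a: SystemProfile, b: SystemProfile) -> dict:
    """Return the dimensions on which two systems take different values."""
    keys = a.dimensions.keys() & b.dimensions.keys()
    return {k: (a.dimensions[k], b.dimensions[k])
            for k in keys if a.dimensions[k] != b.dimensions[k]}

smalltalk = SystemProfile("Smalltalk-80", {
    "feedback loop": "immediate",
    "notation": "single textual language",
    "self-sustainability": "high",
})
spreadsheet = SystemProfile("Spreadsheet", {
    "feedback loop": "immediate",
    "notation": "grid plus formulas",
    "self-sustainability": "low",
})

print(differing_dimensions(smalltalk, spreadsheet))
```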

    A model for automated support for the recognition, extraction, personalization, and reconstruction of static charts

    Data charts are widely used in our daily lives, being present in regular media such as newspapers, magazines, web pages, books, and many others. A well-constructed data chart leads to an intuitive understanding of its underlying data; in the same way, when a data chart reflects poor design choices, a redesign of the representation might be needed. However, in most cases these charts are available only as static images, which means that the original data are not usually available. Automatic methods could therefore be applied to extract the underlying data from chart images to enable such changes. The task of recognizing charts and extracting data from them is complex, largely due to the variety of chart types and their visual characteristics. Computer vision techniques for image classification and object detection are widely used for the problem of recognizing charts, but mostly on images without any disturbance. Other features of real-world images that can make this task difficult, such as photographic distortion, noise, and misalignment, are absent from most works in the literature. Two computer vision techniques that can assist this task, and that have been little explored in this context, are perspective detection and correction. These methods transform a distorted and noisy chart into a clean chart, with its type ready for data extraction or other uses. Reconstructing data is straightforward as long as the data are available, but reconstructing the chart in its original context is complex. A visualization grammar is a key component for this scenario, as such grammars usually offer extensions for interaction, chart layers, and multiple views without requiring extra development effort. This work presents a model for automated support for the custom recognition and reconstruction of charts in images. The model automatically performs the process steps, such as reverse engineering a static chart back into its data table for later reconstruction, while allowing the user to make modifications in case of uncertainties. This work also features a model-based architecture along with prototypes for various use cases. Validation is performed step by step, with methods inspired by the literature. Three use cases provide proof of concept and validation of the model. The first features chart recognition methods focused on real-world documents; the second focuses on the vocalization of charts, using a visualization grammar to reconstruct a chart in audio format; and the third presents an Augmented Reality application that recognizes and reconstructs charts in the same context (a piece of paper), overlaying the new chart and interaction widgets. The results showed that, with slight changes, chart recognition and reconstruction methods are now ready for real-world charts when time, accuracy, and precision are taken into consideration.
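As a minimal sketch of the perspective-correction step described above, assuming an upstream detector has already located the four corners of the chart region (the file names and coordinates below are placeholders), OpenCV's homography utilities suffice:

```python
import cv2
import numpy as np

# Minimal sketch of perspective correction for a photographed chart.
# Assumes an upstream detector has already located the chart's four
# corners; file names and coordinates are placeholders.

def rectify_chart(image, corners, out_w=800, out_h=600):
    """Warp the quadrilateral `corners` (tl, tr, br, bl) to a flat rectangle."""
    src = np.array(corners, dtype=np.float32)
    dst = np.array([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]],
                   dtype=np.float32)
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, M, (out_w, out_h))

img = cv2.imread("chart_photo.jpg")
flat = rectify_chart(img, [(120, 80), (980, 130), (940, 720), (90, 660)])
cv2.imwrite("chart_flat.jpg", flat)
```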

    Modelling and Solving the Single-Airport Slot Allocation Problem

    Currently, there are about 200 overly congested airports where airport capacity does not suffice to accommodate airline demand. These airports play a critical role in the global air transport system since they concern 40% of global passenger demand and act as a bottleneck for the entire air transport system. This imbalance between airport capacity and airline demand leads to excessive delays as well as multi-billion economic, environmental, and societal costs. Concurrently, the implementation of airport capacity expansion projects requires time and space, and is subject to significant resistance from local communities. As a short- to medium-term response, Airport Slot Allocation (ASA) has been used as the main demand management mechanism. The main goal of this thesis is to improve ASA decision-making through the proposition of models and algorithms that provide enhanced ASA decision support. In doing so, this thesis is organised into three distinct chapters that shed light on the following questions (I–V), which remain unaddressed by the existing literature. In parentheses, we identify the chapters of this thesis that relate to each research question.
I. How to improve the modelling of airline demand flexibility and the utility that each airline assigns to each available airport slot? (Chapters 2 and 4)
II. How can one model the dynamic and endogenous adaptation of the airport's landside and airside infrastructure to the characteristics of airline demand? (Chapter 2)
III. How to consider operational delays in strategic ASA decision-making? (Chapter 3)
IV. How to involve the pertinent stakeholders in the ASA decision-making process to select a commonly agreed schedule, and how can one reduce the inherent decision complexity without compromising the quality and diversity of the schedules presented to the decision-makers? (Chapter 3)
V. Given that the ASA process involves airlines (submitting requests for slots) and coordinators (assigning slots to requests based on a set of rules and priorities), how can one jointly consider the interactions between these two sides to improve ASA decision-making? (Chapter 4)
With regard to research questions (I) and (II), the thesis proposes a Mixed Integer Programming (MIP) model that considers airlines' timing flexibility (research question I) and constraints that enable the dynamic and endogenous allocation of the airport's resources (research question II). The proposed modelling variant addresses several additional problem characteristics and policy rules, and considers multiple efficiency objectives, while integrating all constraints that may affect airport slot scheduling decisions, including the asynchronous use of the different airport resources (runway, aprons, passenger terminal) and the endogenous consideration of the capabilities of the airport's infrastructure to adapt to the airline demand's characteristics and the aircraft/flight type associated with each request. The proposed model is integrated into a two-stage solution approach that considers all primary and several secondary policy rules of ASA. New combinatorial results and valid tightening inequalities that facilitate the solution of the problem are proposed and implemented. An extension of the above MIP model, which considers the trade-offs among schedule displacement, maximum displacement, and the number of displaced requests, is integrated into a multi-objective solution framework.
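As a toy sketch of the displacement-minimising core of such a MIP (the data, the single resource, and the displacement-only objective are illustrative simplifications; the thesis's formulation covers multiple resources, policy rules, and further objectives), using the PuLP modelling library:

```python
import pulp

# Toy sketch: assign requests to slots minimising total displacement,
# respecting declared capacity and each airline's stated timing
# flexibility window. All data are illustrative.

requests = {          # request -> (requested slot, earliest, latest)
    "R1": (10, 9, 11),
    "R2": (10, 10, 12),
    "R3": (11, 9, 12),
}
slots = [9, 10, 11, 12]
capacity = {s: 1 for s in slots}          # movements per slot

prob = pulp.LpProblem("slot_allocation", pulp.LpMinimize)
x = {(r, s): pulp.LpVariable(f"x_{r}_{s}", cat="Binary")
     for r, (t, lo, hi) in requests.items()
     for s in slots if lo <= s <= hi}     # only slots within the window

# Objective: total schedule displacement |allocated - requested|.
prob += pulp.lpSum(abs(s - requests[r][0]) * v for (r, s), v in x.items())

for r in requests:                        # every request receives one slot
    prob += pulp.lpSum(v for (q, s), v in x.items() if q == r) == 1
for s in slots:                           # declared capacity per slot
    prob += pulp.lpSum(v for (q, t), v in x.items() if t == s) <= capacity[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({r: s for (r, s), v in x.items() if v.value() == 1})
```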
The proposed framework holistically considers the preferences of all ASA stakeholder groups (research question IV) concerning multiple performance metrics and models the operational delays associated with each airport schedule (research question III). The delays of each schedule/solution are macroscopically estimated, and a subtractive clustering algorithm and a parameter tuning routine reduce the inherent decision complexity by pruning non-dominated solutions without compromising the representativeness of the alternatives offered to the decision-makers (research question IV). Following the determination of the representative set, the expected delay estimates of each schedule are further refined by considering the whole airfield's operations, the landside, and the airside infrastructure. The representative schedules are ranked based on the preferences of all ASA stakeholder groups concerning each schedule's displacement-related and operational-delay performance. Finally, in considering the interactions between airlines' timing flexibility and utility, and the policy-based priorities assigned by the coordinator to each request (research question V), the thesis models the ASA problem as a two-sided matching game and provides guarantees on the stability of the proposed schedules. A Stable Airport Slot Allocation Model (SASAM) capitalises on the flexibility considerations introduced for addressing research question (I) by exploiting data submitted by the airlines during the ASA process, and provides functions that proxy each request's value, considering both the airlines' timing flexibility for each submitted request and the requests' prioritisation by the coordinators under the policy rules defining the ASA process. The thesis argues for the compliance of the proposed functions with the primary regulatory requirements of the ASA process and demonstrates their applicability to different types of slot requests. SASAM guarantees stability through sets of inequalities that prune allocations blocking the formation of stable schedules. A multi-objective Deferred-Acceptance (DA) algorithm guaranteeing the stability of each generated schedule is developed; the algorithm can generate all stable non-dominated points by considering the trade-off between spilled airline and passenger demand and maximum displacement. The work conducted in this thesis addresses several problem characteristics and sheds light on their implications for ASA decision-making, hence having the potential to improve ASA decision-making. Our findings suggest that the consideration of airlines' timing flexibility (research question I) results in improved capacity utilisation and scheduling efficiency. The endogenous consideration of the ability of the airport's infrastructure to adapt to the characteristics of airline demand (research question II) enables a more efficient representation of airport declared capacity and results in the scheduling of additional requests. The concurrent consideration of airlines' timing flexibility and the endogenous adaptation of airport resources to airline demand achieves an improved alignment between the airport infrastructure and the characteristics of airline demand, thus yielding schedules of improved efficiency.
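A minimal sketch of one deferred-acceptance round for this two-sided setting (requests propose to slots in the airline's order of preference; each slot tentatively retains the proposals that the coordinator's priority ranks highest and defers the rest; preference lists, priorities, and capacities below are illustrative):

```python
# Minimal deferred-acceptance sketch for request-to-slot matching.
# Airline preference lists and coordinator priorities are illustrative.

airline_prefs = {                    # each request's slots, best first
    "R1": [10, 11], "R2": [10, 9], "R3": [10, 12],
}
priority = {"R1": 3, "R2": 2, "R3": 1}   # coordinator's ranking (higher wins)
capacity = {9: 1, 10: 1, 11: 1, 12: 1}

held = {s: [] for s in capacity}     # tentative assignments per slot
next_choice = {r: 0 for r in airline_prefs}
free = set(airline_prefs)

while free:
    r = free.pop()
    if next_choice[r] >= len(airline_prefs[r]):
        continue                     # list exhausted: request unaccommodated
    s = airline_prefs[r][next_choice[r]]
    next_choice[r] += 1
    held[s].append(r)
    held[s].sort(key=lambda q: -priority[q])
    while len(held[s]) > capacity[s]:
        free.add(held[s].pop())      # lowest-priority proposal is deferred

print({s: rs for s, rs in held.items() if rs})
```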
The modelling and evaluation of the peak operational delays associated with the different airport schedules (research question III) allows the study of the implications of strategic ASA decision-making for operations and quantifies the impact of the airport's declared capacity on each schedule's operational performance. In considering the preferences of the relevant ASA stakeholders (airlines, coordinators, airport, and air traffic authorities) concerning multiple operational and strategic ASA efficiency metrics (research question IV), the thesis assesses the impact of alternative preference considerations and indicates a commonly preferred schedule that balances the stakeholders' preferences. The proposition of representative subsets of alternative schedules reduces decision complexity without significantly compromising the quality of the alternatives offered to the decision-making process (research question IV). The modelling of ASA as a two-sided matching game (research question V) results in stable schedules consisting of request-to-slot assignments that provide no incentive for airlines and coordinators to reject or alter the proposed timings. Furthermore, the proposition of stable schedules results in more intensive use of airport capacity, while simultaneously improving scheduling efficiency. The models and algorithms developed as part of this thesis are tested using airline requests and airport capacity data from coordinated airports. Computational results that are relevant to the context of the considered airport instances provide evidence of potential improvements to the current ASA process and facilitate data-driven policy and decision-making. In particular, with regard to the alignment of airline demand with the capabilities of the airport's infrastructure (questions I and II), computational results report improved slot allocation efficiency and airport capacity utilisation, which for the considered airport instance translate to improvements ranging between 5-24% across various schedule performance metrics. In reducing the difficulty associated with the assessment of multiple ASA solutions by the stakeholders (question IV), instance-specific results suggest an 87% reduction in the number of alternative schedules, while maintaining the quality of the solutions presented to the stakeholders above 70% (expressed in relation to the initially considered set of schedules). Meanwhile, computational results suggest that the concurrent consideration of ASA stakeholders' preferences (research question IV) with regard to both operational (research question III) and strategic performance metrics leads to alternative airport slot scheduling solutions that inform on the trade-offs between the schedules' operational and strategic performance and the stakeholders' preferences. Concerning research question (V), the application of SASAM and the DA algorithm suggests improvements to the number of unaccommodated flights and passengers (13% and 40% improvements, respectively) at the expense of requests concerning fewer passengers and days of operations (increasing the number of rejected requests by 1.2% in relation to the total number of submitted requests). The research conducted in this thesis aids in the identification of limitations that should be addressed by future studies to further improve ASA decision-making. First, the thesis focuses on exact solution approaches that consider the landside and airside infrastructure of the airport and generate multiple schedules.
The proposition of pre-processing techniques that identify the bottleneck of the airport's capacity, i.e., landside and/or airside, could reduce the size of the proposed formulations and improve the required computational times. Meanwhile, the development of multi-objective heuristic algorithms that consider several problem characteristics and generate multiple efficient schedules in reasonable computational times could extend the capabilities of the models proposed in this thesis and provide decision support for some of the world's most congested airports. Furthermore, the thesis models and evaluates the operational implications of strategic airport slot scheduling decisions. The explicit consideration of operational delays as an objective in ASA optimisation models and algorithms merits investigation, since it may further improve the operational performance of the generated schedules. In accordance with current practice, the models proposed in this work consider deterministic capacity parameters. Future research could propose formulations that consider stochastic representations of airport declared capacity and improve strategic ASA decision-making through the anticipation of operational uncertainty and weather-induced capacity reductions. Finally, in modelling airlines' utility for each submitted request and available time slot, the thesis proposes time-dependent functions that use available data to approximate airlines' scheduling preferences. Future studies wishing to improve the accuracy of the proposed functions could utilise commercial data sources that provide route-specific information or, where such data are unavailable, employ data mining and machine learning methodologies to extract airlines' time-dependent utility and preferences.

    Digital asset management via distributed ledgers

    Distributed ledgers rose to prominence with the advent of Bitcoin, the first provably secure protocol to solve consensus in an open-participation setting. Subsequently, active research and engineering efforts have proposed a multitude of applications and alternative designs, the most prominent being Proof-of-Stake (PoS). This thesis expands the scope of secure and efficient asset management over a distributed ledger around three axes: i) cryptography; ii) distributed systems; iii) game theory and economics. First, we analyze the security of various wallets. We start with a formal model of hardware wallets, followed by an analytical framework of PoS wallets, each outlining the unique properties of Proof-of-Work (PoW) and PoS respectively. The latter also provides a rigorous design for forming collaborative participating entities, called stake pools. We then propose Conclave, a stake pool design which enables a group of parties to participate in a PoS system in a collaborative manner, without a central operator. Second, we focus on efficiency. Decentralized systems are aimed at thousands of users across the globe, so a rigorous design for minimizing memory and storage consumption is a prerequisite for scalability. To that end, we frame ledger maintenance as an optimization problem and design a multi-tier framework for designing wallets which ensures that updates increase the ledger's global state only to a minimal extent, while preserving the security guarantees outlined in the security analysis. Third, we explore incentive-compatibility and analyze blockchain systems from a micro- and a macroeconomic perspective. We enrich our cryptographic and systems results by analyzing the incentives of collective pools and designing a state-efficient Bitcoin fee function. We then analyze the Nash dynamics of distributed ledgers, introducing a formal model that evaluates whether rational, utility-maximizing participants are disincentivized from exhibiting undesirable infractions, and highlighting the differences between PoW- and PoS-based ledgers, both in a standalone setting and under external parameters, like market price fluctuations. We conclude by introducing a macroeconomic principle, cryptocurrency egalitarianism, and then describing two mechanisms for enabling taxation in blockchain-based currency systems.
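As a toy illustration of the kind of check such a Nash-dynamics model performs (the participants, strategies, and utilities below are made-up numbers, not the thesis's protocol model), one can ask whether any participant strictly profits from a unilateral infraction:

```python
# Toy Nash-dynamics check: is any rational participant better off
# deviating from the prescribed protocol? Utilities are made-up numbers.

utilities = {
    # participant -> {strategy: utility}
    "miner_A": {"comply": 10.0, "withhold_block": 9.5, "censor": 8.0},
    "miner_B": {"comply": 7.0, "withhold_block": 7.2, "censor": 5.0},
}

def profitable_infractions(utilities):
    """Return deviations that strictly beat compliance for some participant."""
    findings = []
    for participant, strategies in utilities.items():
        base = strategies["comply"]
        for strategy, utility in strategies.items():
            if strategy != "comply" and utility > base:
                findings.append((participant, strategy, utility - base))
    return findings

# miner_B gains 0.2 by withholding: compliance is not an equilibrium here.
print(profitable_infractions(utilities))
```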

    Management controls, government regulations, customer involvement: Evidence from a Chinese family-owned business

    This research reports on a case study of a family-owned elevator manufacturing company in China, where management control was sandwiched between state policies and global customers' production requirements. By analysing the roles of the government and the customer, this thesis aims to illustrate how management control operated in a family-owned business and why it was done differently. In particular, it focuses on how international production standards and existing Chinese industry policies were translated into a set of management control practices through a local network within the family-owned business studied. Based on an ethnographic approach, I spent six months in the field, conducted over 30 interviews and several conversations, and reviewed relevant internal documents to understand how management control (MC) techniques and human actors cooperated in the company. I also examined how two layers of pressure shaped company behaviour and how a company located in a developing country connects with a global network; I found considerable tension among key actors and investigated how the company responded to and managed it. Drawing on Actor Network Theory (ANT), I analysed the interviews with key actors and examined the role of government regulations and customer requirements to see how management control was managed under two layers of pressure: government regulations (e.g., labour, tax, environmental controls) and customer requirements (e.g., quality and production controls). Management controls acted as an obligatory passage point (OPP): elements of Western production requirements and government requirements were transformed as they arrived at the local Chinese factory and influenced management control and budgeting. The findings suggest that management control systems are not only a set of technical procedures but also a matter of managing tensions. A purely technical understanding reflects a linear rather than a social perspective on MC practices. When we use ANT as a theoretical perspective, however, we see actors who are obliged, sandwiched, and controlled by external forces they must follow. Consequently, human actors must work within an unavoidable OPP; this is the tension they face, and it constructs the mundane practices of MC. Hence, MCs are about managing such tensions. This study contributes to management control research by analysing management controls in terms of an OPP, extends our understanding by illustrating the roles of the government and customers, and advances our understanding of family-owned businesses from a management controls perspective in a developing country.

    Machine learning for managing structured and semi-structured data

    As the digitalization of the private, commercial, and public sectors advances rapidly, an increasing amount of data is becoming available. In order to gain insights or knowledge from these enormous amounts of raw data, deep analysis is essential. The immense volume requires highly automated processes with minimal manual interaction. In recent years, machine learning methods have taken on a central role in this task. In addition to the individual data points, their interrelationships often play a decisive role, e.g., whether two patients are related to each other or whether they are treated by the same physician. Hence, relational learning is an important branch of research, which studies how to harness this explicitly available structural information between different data points. Recently, graph neural networks have gained importance; these can be considered an extension of convolutional neural networks from regular grids to general (irregular) graphs. Knowledge graphs play an essential role in representing facts about entities in a machine-readable way. While great efforts are made to store as many facts as possible in these graphs, they often remain incomplete, i.e., true facts are missing. Manual verification and expansion of the graphs is becoming increasingly difficult due to the large volume of data and must therefore be assisted or substituted by automated procedures which predict missing facts. The field of knowledge graph completion can be roughly divided into two categories: Link Prediction and Entity Alignment. In Link Prediction, machine learning models are trained to predict unknown facts between entities based on the known facts. Entity Alignment aims at identifying shared entities between graphs in order to link several such knowledge graphs based on some provided seed alignment pairs. In this thesis, we present important advances in the field of knowledge graph completion. For Entity Alignment, we show how to reduce the number of required seed alignments while maintaining performance through novel active learning techniques. We also discuss the power of textual features and show that graph-neural-network-based methods have difficulties with noisy alignment data. For Link Prediction, we demonstrate how to improve the prediction for entities unseen at training time by exploiting additional metadata on individual statements, often available in modern graphs. Supported by results from a large-scale experimental study, we present an analysis of the effect of individual components of machine learning models, e.g., the interaction function or loss criterion, on the task of link prediction. We also introduce a software library that simplifies the implementation and study of such components and makes them accessible to a wide research community, ranging from relational learning researchers to applied fields such as the life sciences. Finally, we propose a novel metric for evaluating ranking results, as used for both completion tasks. It allows for easier interpretation and comparison, especially in cases with different numbers of ranking candidates, as encountered in the de-facto standard evaluation protocols for both tasks.
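A small sketch of the interpretability problem with raw ranks, together with one hedged stand-in for the proposed metric: normalising the mean rank by its expectation under a uniformly random scorer makes scores comparable across candidate sets of different sizes (this normalisation is an illustrative assumption, not necessarily the thesis's exact definition):

```python
# Sketch: comparing ranking results across different candidate-set sizes.
# Normalising the mean rank by its expectation under random ordering is an
# illustrative stand-in for the metric proposed in the thesis.

def mean_rank(ranks):
    return sum(ranks) / len(ranks)

def adjusted_mean_rank(ranks, num_candidates):
    """Mean rank divided by the expected mean rank of a random scorer.

    For a candidate set of size n, a random scorer ranks the true entity
    at (n + 1) / 2 on average; values below 1 are better than random.
    """
    expected = sum((n + 1) / 2 for n in num_candidates) / len(num_candidates)
    return mean_rank(ranks) / expected

# The same raw ranks signal very different quality at different set sizes:
print(adjusted_mean_rank([5, 3, 8], [10, 10, 10]))        # ~0.97: near random
print(adjusted_mean_rank([5, 3, 8], [1000, 1000, 1000]))  # ~0.01: far better
```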

    Interdisciplinary and Universal Digital Competence for Foreign Languages Education

    The development of the knowledge economy, the establishment of the networked society, and the emergency digitization caused by COVID-19 pandemic measures have imposed elaborate interdisciplinary and transdisciplinary demands on the marketability of the skills and competences of Liberal Arts university graduates upon entering the workforce. The study focuses on in-depth diagnostics of the development of multipurpose orientation, universality, and interdisciplinarity of skillsets for students of European (English, Spanish, French, Italian, German) and Oriental (Mandarin Chinese, Japanese) Languages major programs across educational activities during the COVID-19 quarantine measures of March 2020 to December 2021. A computational framework of foreign languages education interdisciplinarity is introduced in the study. Survey analysis is used to evaluate the dimensions of interdisciplinarity, universality, and transdisciplinarity, informed by the interoperability of soft skills and digital communication skills for Foreign Languages Education across contrasting timeframes and stages of foreign language acquisition and early career training.