15,259 research outputs found

    Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions

    In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to being able to ensure that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers to take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. Several new design tools, including a mapping method for the FDM manufacturability constraints, three major literature reviews, the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials, some new experimental equipment, and the refinement of a fast and simple g-code generator based on commercially-available software, were developed and refined to support the design of MPDSMs under fracture conditions. The refined design method and rules were experimentally validated using a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide for practicing engineers who are not experts in advanced solid mechanics nor process-tailored materials was developed from the results of this project.
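    The element-layout idea above is easiest to see in the g-code that drives an FDM/FFF machine: each deposited bead is simply an extruding move along a pre-specified trace. The following is a minimal, hypothetical sketch of a generator for one serpentine raster layer, assuming RepRap/Marlin-style G1 commands; the bead width, layer height, and extrusion model are illustrative assumptions, not the dissertation's actual g-code generator.

    ```python
    # Minimal sketch: emit one serpentine raster layer as RepRap/Marlin-style g-code.
    # Bead width, layer height, and the extrusion model are illustrative assumptions.
    import math

    def raster_layer(width=20.0, depth=10.0, bead_w=0.4, layer_h=0.2,
                     z=0.2, filament_d=1.75, feed=1800):
        filament_area = math.pi * (filament_d / 2.0) ** 2
        e = 0.0                                    # cumulative filament extrusion (mm)
        lines = [f"G1 Z{z:.3f} F{feed}",           # move to layer height
                 f"G1 X0.000 Y0.000 F{feed}"]      # travel to layer start
        n_traces = int(depth / bead_w) + 1
        for i in range(n_traces):
            y = i * bead_w
            x_end = width if i % 2 == 0 else 0.0   # serpentine: alternate direction
            # filament length needed for one bead of volume bead_w * layer_h * width
            e += (bead_w * layer_h * width) / filament_area
            lines.append(f"G1 X{x_end:.3f} Y{y:.3f} E{e:.5f}")   # deposit one element
            if i < n_traces - 1:                   # travel (no extrusion) to next trace
                lines.append(f"G1 X{x_end:.3f} Y{y + bead_w:.3f}")
        return "\n".join(lines)

    print(raster_layer())
    ```

    Element layouts of the kind studied in the dissertation would replace this fixed raster with traces chosen by the design method, but the g-code vocabulary stays the same.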

    Sistema de bloqueio de computadores (Computer locking system)

    Master's in Computer and Telematics Engineering (Mestrado em Engenharia de Computadores e Telemática). The use of multiple computing devices per person is increasing. Nowadays it is normal for mobile devices such as smartphones, tablets, and laptops to be present in a person's everyday life, and in many cases people use these devices to perform important operations related to their professional life. This also presents a problem: because these devices accompany the user throughout the day and often have a high monetary value, they are susceptible to theft. This thesis introduces a computer locking system that distinguishes itself from existing similar systems because (i) it is designed to work independently of the operating system(s) installed on the laptop or mobile device, and (ii) it depends on a firmware driver that implements the lock operation, making it resistant to storage-device formatting or any other attack based on software operations. The thesis also explores the operation of a device whose firmware follows the Unified Extensible Firmware Interface (UEFI) specification, as well as the development of drivers for this type of firmware. A security protocol was also developed, and various cryptographic techniques were explored and implemented.
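    The abstract mentions a security protocol and cryptographic techniques without detail. Purely as an illustration of the kind of challenge-response unlock exchange such a firmware-level lock could use (not the protocol developed in the thesis), here is a minimal sketch: the device holds a shared secret, issues a fresh random challenge, and unlocks only if the response is the correct HMAC of that challenge.

    ```python
    # Hypothetical challenge-response unlock exchange (illustrative only; not the
    # thesis's actual protocol). The device accepts an unlock request only if the
    # response is a valid HMAC-SHA256 of a fresh random challenge under a shared secret.
    import hmac
    import hashlib
    import secrets

    SHARED_SECRET = secrets.token_bytes(32)   # provisioned to the firmware and the owner

    def device_issue_challenge() -> bytes:
        return secrets.token_bytes(16)        # fresh nonce, prevents replay

    def owner_respond(challenge: bytes, secret: bytes) -> bytes:
        return hmac.new(secret, challenge, hashlib.sha256).digest()

    def device_verify(challenge: bytes, response: bytes, secret: bytes) -> bool:
        expected = hmac.new(secret, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)   # constant-time comparison

    challenge = device_issue_challenge()
    response = owner_respond(challenge, SHARED_SECRET)
    print("unlock granted:", device_verify(challenge, response, SHARED_SECRET))
    ```

    In the thesis's setting the verification side would live in the UEFI firmware driver rather than in Python; the exchange above only illustrates the cryptographic shape such a protocol might take.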

    PrivLava: Synthesizing Relational Data with Foreign Keys under Differential Privacy

    Answering database queries while preserving privacy is an important problem that has attracted considerable research attention in recent years. A canonical approach to this problem is to use synthetic data. That is, we replace the input database R with a synthetic database R* that preserves the characteristics of R, and use R* to answer queries. Existing solutions for relational data synthesis, however, either fail to provide strong privacy protection, or assume that R contains a single relation. In addition, it is challenging to extend the existing single-relation solutions to the case of multiple relations, because they are unable to model the complex correlations induced by the foreign keys. Therefore, multi-relational data synthesis with strong privacy guarantees is an open problem. In this paper, we address the above open problem by proposing PrivLava, the first solution for synthesizing relational data with foreign keys under differential privacy, a rigorous privacy framework widely adopted in both academia and industry. The key idea of PrivLava is to model the data distribution in R using graphical models, with latent variables included to capture the inter-relational correlations caused by foreign keys. We show that PrivLava supports arbitrary foreign key references that form a directed acyclic graph, and is able to tackle the common case when R contains a mixture of public and private relations. Extensive experiments on census data sets and the TPC-H benchmark demonstrate that PrivLava significantly outperforms its competitors in terms of the accuracy of aggregate queries processed on the synthetic data. Comment: This is an extended version of a SIGMOD 2023 paper.
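    PrivLava's graphical-model construction is beyond a short snippet, but the core difficulty it targets (releasing statistics about foreign-key relationships without exposing individual rows) can be illustrated with a standard Laplace-mechanism sketch. The example below privately releases the fan-out histogram of a foreign key (how many child rows reference each parent); the toy schema, the clipping bound, and epsilon are illustrative assumptions, and this is not PrivLava's algorithm.

    ```python
    # Illustrative only: differentially private release of a foreign-key fan-out
    # histogram via the Laplace mechanism. Not PrivLava's actual algorithm.
    import random
    from collections import Counter

    def private_fanout_histogram(child_fks, parent_ids, epsilon=1.0, max_fanout=10):
        """child_fks: list of parent ids referenced by the child rows."""
        counts = Counter(child_fks)
        # Clip each parent's fan-out so that any single parent (with its children)
        # has bounded influence on the histogram; the L1 sensitivity is then max_fanout.
        clipped = {p: min(counts.get(p, 0), max_fanout) for p in parent_ids}
        scale = max_fanout / epsilon          # Laplace scale = sensitivity / epsilon
        return {p: c + random.expovariate(1 / scale) - random.expovariate(1 / scale)
                for p, c in clipped.items()}  # difference of exponentials ~ Laplace(scale)

    # Toy schema: orders.customer_id -> customers.id
    customers = [1, 2, 3]
    orders_fk = [1, 1, 1, 2]                  # customer 1 has 3 orders, 2 has 1, 3 has 0
    print(private_fanout_histogram(orders_fk, customers, epsilon=1.0, max_fanout=5))
    ```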

    Теорія систем мобільних інфокомунікацій. Системна архітектура (Theory of Mobile Infocommunication Systems: System Architecture)

    This textbook describes the logical and physical structures, procedures, algorithms, protocols, and principles of construction and operation of cellular mobile communication networks (up to 3G) and mobile infocommunication networks (4G and higher), with attention to the general architectures of mobile operators' networks, their management and coordination, and the continuous evolution of how such networks operate and deliver services. The manual has seven sections and is structured so that the complexity of the material increases with each subsequent section. It is intended for students pursuing a bachelor's degree in specialty 172 "Telecommunications and Radio Engineering" and will also be useful to graduate students and to scientific and engineering staff working in information and telecommunication systems and technologies.

    A Benchmark Framework for Data Compression Techniques

    Lightweight data compression is frequently applied in main memory database systems to improve query performance. The data processed by such systems is highly diverse. Moreover, there is a high number of existing lightweight compression techniques. Therefore, choosing the optimal technique for a given dataset is non-trivial. Existing approaches are based on simple rules, which do not suffice for such a complex decision. In contrast, our vision is a cost-based approach. However, this requires a detailed cost model, which can only be obtained from a systematic benchmarking of many compression algorithms on many different datasets. A naïve benchmark evaluates every algorithm under consideration separately. This yields many redundant steps and is thus inefficient. We propose an efficient and extensible benchmark framework for compression techniques. Given an ensemble of algorithms, it minimizes the overall run time of the evaluation. We experimentally show that our approach outperforms the naïve approach.
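    The efficiency argument above (evaluate an ensemble of algorithms while sharing the work they have in common, rather than running each one in isolation) can be sketched in a few lines. The harness below is a hypothetical illustration, not the paper's framework: per-dataset preparation is done once and reused by every algorithm, and only the compression step is timed; zlib and lzma merely stand in for lightweight column-compression techniques.

    ```python
    # Hypothetical benchmark harness (not the paper's framework): share per-dataset
    # preparation across all algorithms instead of repeating it for each one.
    import time
    import zlib
    import lzma

    def prepare(spec):
        """Expensive shared step: load/generate and lay out the data once."""
        return ("some integer column " * spec["repeat"]).encode()

    def timed(fn, data):
        start = time.perf_counter()
        fn(data)
        return time.perf_counter() - start

    def benchmark(datasets, algorithms, runs=3):
        results = {}
        for ds_name, spec in datasets.items():
            data = prepare(spec)                  # done once, reused by every algorithm
            for alg_name, compress in algorithms.items():
                results[(ds_name, alg_name)] = min(timed(compress, data) for _ in range(runs))
        return results

    datasets = {"small": {"repeat": 1_000}, "large": {"repeat": 100_000}}
    algorithms = {"zlib": zlib.compress, "lzma": lzma.compress}
    for key, seconds in benchmark(datasets, algorithms).items():
        print(key, f"{seconds:.4f}s")
    ```

    A cost model of the kind the authors envision would be fitted to measurements like these, collected over many datasets and algorithms.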

    Foundations for programming and implementing effect handlers

    First-class control operators provide programmers with an expressive and efficient means for manipulating control through reification of the current control state as a first-class object, enabling programmers to implement their own computational effects and control idioms as shareable libraries. Effect handlers provide a particularly structured approach to programming with first-class control by naming control-reifying operations and separating them from their handling. This thesis is composed of three strands of work in which I develop operational foundations for programming and implementing effect handlers as well as explore the expressive power of effect handlers.
    The first strand develops a fine-grain call-by-value core calculus of a statically typed programming language with a structural notion of effect types, as opposed to the nominal notion of effect types that dominates the literature. With the structural approach, effects need not be declared before use. The usual safety properties of statically typed programming are retained by making crucial use of row polymorphism to build and track effect signatures. The calculus features three forms of handlers: deep, shallow, and parameterised. Each offers a different approach to manipulating the control state of programs. Traditional deep handlers are defined by folds over computation trees, and are the original construct proposed by Plotkin and Pretnar. Shallow handlers are defined by case splits (rather than folds) over computation trees. Parameterised handlers are deep handlers extended with a state value that is threaded through the folds over computation trees. To demonstrate the usefulness of effects and handlers as a practical programming abstraction, I implement the essence of a small UNIX-style operating system complete with multi-user environment, time-sharing, and file I/O.
    The second strand studies continuation passing style (CPS) and abstract machine semantics, which are foundational techniques that admit a unified basis for implementing deep, shallow, and parameterised effect handlers in the same environment. The CPS translation is obtained through a series of refinements of a basic first-order CPS translation for a fine-grain call-by-value language into an untyped language. Each refinement moves toward a more intensional representation of continuations, eventually arriving at the notion of generalised continuations, which admit simultaneous support for deep, shallow, and parameterised handlers. The initial refinement adds support for deep handlers by representing stacks of continuations and handlers as a curried sequence of arguments. The image of the resulting translation is not properly tail-recursive, meaning some function application terms do not appear in tail position. To rectify this, the CPS translation is refined once more to obtain an uncurried representation of stacks of continuations and handlers. Finally, the translation is made higher-order in order to contract administrative redexes at translation time. The generalised continuation representation is used to construct an abstract machine that provides simultaneous support for deep, shallow, and parameterised effect handlers.
    The third strand explores the expressiveness of effect handlers. First, I show that the deep, shallow, and parameterised notions of handlers are interdefinable by way of typed macro-expressiveness, which provides a syntactic notion of expressiveness that affirms the existence of encodings between handlers but provides no information about the computational content of the encodings. Second, using a semantic notion of expressiveness, I show that for a class of programs a programming language with first-class control (e.g. effect handlers) admits asymptotically faster implementations than are possible in a language without first-class control.
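    The phrase "deep handlers are defined by folds over computation trees" can be made concrete with a small sketch. Below, computations are encoded as trees of Return/Do nodes and a deep handler folds over them: the resumption passed to each operation clause re-enters the handler, so the rest of the computation stays handled. The encoding and the example state handler are an illustrative paraphrase in Python, not the thesis's calculus.

    ```python
    # Illustrative encoding of deep effect handlers as folds over computation trees
    # (not the thesis's calculus). A computation is either Return(value) or
    # Do(op, arg, k), where k maps the operation's result to the rest of the tree.
    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class Return:
        value: Any

    @dataclass
    class Do:
        op: str
        arg: Any
        k: Callable[[Any], Any]   # continuation producing the rest of the tree

    def handle_deep(comp, clauses, ret):
        """Deep handler: the resumption re-enters the handler (a fold)."""
        if isinstance(comp, Return):
            return ret(comp.value)
        resume = lambda x: handle_deep(comp.k(x), clauses, ret)  # handled resumption
        return clauses[comp.op](comp.arg, resume)

    # Effectful program: read the state, write state + 1, return the new value.
    prog = Do("get", None, lambda s: Do("put", s + 1, lambda _: Return(s + 1)))

    # State handler: interpret the tree as a function from an initial state.
    state_clauses = {
        "get": lambda _, resume: (lambda s: resume(s)(s)),
        "put": lambda new, resume: (lambda s: resume(None)(new)),
    }
    run = handle_deep(prog, state_clauses, ret=lambda v: (lambda s: v))
    print(run(41))   # prints 42
    ```

    A shallow handler would instead hand the clause the raw continuation comp.k, unwrapped, leaving the programmer to re-install a handler explicitly; a parameterised handler would thread an extra state argument through each recursive call.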

    A productive response to legacy system petrification

    Requirements change. The requirements of a legacy information system change, often in unanticipated ways, and at a more rapid pace than the rate at which the information system itself can be evolved to support them. The capabilities of a legacy system progressively fall further and further behind their evolving requirements, in a degrading process termed petrification. As systems petrify, they deliver diminishing business value, hamper business effectiveness, and drain organisational resources. To address legacy systems, the first challenge is to understand how to shed their resistance to tracking requirements change. The second challenge is to ensure that a newly adaptable system never again petrifies into a change-resistant legacy system. This thesis addresses both challenges. The approach outlined herein is underpinned by an agile migration process - termed Productive Migration - that homes in upon the specific causes of petrification within each particular legacy system and provides guidance upon how to address them. That guidance comes in part from a personalised catalogue of petrifying patterns, which capture recurring themes underlying petrification. These steer us to the problems actually present in a given legacy system, and lead us to suitable antidote productive patterns via which we can deal with those problems one by one. To prevent newly adaptable systems from again degrading into legacy systems, we appeal to a follow-on process, termed Productive Evolution, which embraces and keeps pace with change rather than resisting and falling behind it. Productive Evolution teaches us to be vigilant against signs of system petrification and helps us to nip them in the bud. The aim is to nurture systems that remain supportive of the business, that are adaptable in step with ongoing requirements change, and that continue to retain their value as significant business assets.

    AI Fairness at Subgroup Level – A Structured Literature Review

    AI applications in practice often fail to gain the required acceptance from stakeholders due to fairness issues. Research has primarily investigated AI fairness at the individual or group level. However, a growing body of research indicates shortcomings in this two-fold view. In particular, failing to account for the heterogeneity within groups leads to increasing demand for fairness to be considered specifically at the subgroup level. Subgroups emerge from the conjunction of several protected attributes, and the fundamental goal is an equal distribution of classified individuals across subgroups. This paper analyzes the fundamentals of subgroup fairness and its integration with group and individual fairness. Based on a literature review, we analyze the existing concepts of subgroup fairness in research. Our paper raises awareness for this largely neglected topic in IS research and contributes to the understanding of AI subgroup fairness by providing a deeper understanding of the underlying concepts and their implications for AI development and operation in practice.
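    To make the subgroup view concrete, the sketch below computes positive-prediction rates for every subgroup formed by the conjunction of two protected attributes and reports the largest gap. The attributes, the toy data, and the choice of a statistical-parity-style rate are illustrative assumptions, not taken from the paper.

    ```python
    # Illustrative subgroup check (not from the paper): compare positive-prediction
    # rates across subgroups defined by conjunctions of protected attributes.
    from collections import defaultdict

    def subgroup_positive_rates(records, protected_attrs):
        totals, positives = defaultdict(int), defaultdict(int)
        for r in records:
            key = tuple(r[a] for a in protected_attrs)   # conjunction, e.g. (gender, age_band)
            totals[key] += 1
            positives[key] += r["prediction"]
        return {k: positives[k] / totals[k] for k in totals}

    records = [
        {"gender": "f", "age_band": "young", "prediction": 1},
        {"gender": "f", "age_band": "old",   "prediction": 0},
        {"gender": "m", "age_band": "young", "prediction": 1},
        {"gender": "m", "age_band": "old",   "prediction": 1},
        {"gender": "f", "age_band": "young", "prediction": 0},
    ]

    rates = subgroup_positive_rates(records, ["gender", "age_band"])
    gap = max(rates.values()) - min(rates.values())
    print(rates)
    print(f"largest subgroup disparity: {gap:.2f}")   # 0.0 would mean parity across subgroups
    ```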

    AIUCD 2022 - Proceedings

    The eleventh edition of the National Conference of AIUCD (Associazione di Informatica Umanistica) is titled Culture digitali. Intersezioni: filosofia, arti, media (Digital Cultures. Intersections: Philosophy, Arts, Media). The title explicitly calls for a methodological and theoretical reflection on the interrelation between digital technologies, information sciences, philosophical disciplines, the world of the arts, and cultural studies.