
    Block copolymer self-assembly for nanophotonics

    The ability to control and modulate the interaction of light with matter is crucial to achieve desired optical properties including reflection, transmission, and selective polarization. Photonic materials rely upon precise control over composition and morphology to establish periodic interactions with light on the wavelength and sub-wavelength length scales. Supramolecular assembly provides a natural solution, allowing a desired 3D architecture to be encoded into the chemical building blocks and assembly conditions. The compatibility with solution processing and low-overhead manufacturing is a significant advantage over more complex approaches such as lithography or colloidal assembly. Here we review recent advances in photonic architectures derived from block copolymers and highlight the influence and complexity of processing pathways. Notable examples that have emerged from this unique synthesis platform include Bragg reflectors, antireflective coatings, and chiral metamaterials. We further predict expanded photonic capabilities and limits of these approaches in light of future developments of the field.

    Sensemaking: Bringing theories and tools together

    This work is an attempt to reconcile three separate but influential threads in the study of sensemaking. The first two of these are theories from different domains: human-computer interaction (HCI) and social/organizational psychology. The third thread is that of design, of sensemaking support tools. Integrated, these three threads form a strong foundation for researchers, tool designers, and ultimately sensemakers themselves. Understanding and supporting the special role of person-to-person interaction can help us tie these separate threads together. This synthesis also suggests further research that can expand our understanding of sensemaking.

    Developmental Bootstrapping of AIs

    Although some current AIs surpass human abilities in closed artificial worlds such as board games, their abilities in the real world are limited. They make strange mistakes and do not notice them. They cannot be instructed easily, fail to use common sense, and lack curiosity. They do not make good collaborators. The mainstream approaches for creating AIs are the traditional manually constructed symbolic AI approach and generative and deep learning approaches, including large language models (LLMs). These systems are not well suited for creating robust and trustworthy AIs. Although it is outside of the mainstream, the developmental bootstrapping approach has more potential. In developmental bootstrapping, AIs develop competences as human children do. They start with innate competences. They interact with the environment and learn from their interactions. They incrementally extend their innate competences with self-developed competences. They interact with and learn from people and establish perceptual, cognitive, and common grounding. They acquire the competences they need through bootstrapping. However, developmental robotics has not yet produced AIs with robust adult-level competences. Projects have typically stopped at the Toddler Barrier, corresponding to human development at about two years of age, before speech is fluent. They also do not bridge the Reading Barrier, to skillfully and skeptically draw on the socially developed information resources that power current LLMs. The next competences in human cognitive development involve intrinsic motivation, imitation learning, imagination, coordination, and communication. This position paper lays out the logic, prospects, gaps, and challenges for extending the practice of developmental bootstrapping to acquire further competences and create robust, resilient, and human-compatible AIs. (102 pages, 29 figures)

    A Randomized Controlled Trial on the Impact of Polyglot Programming in a Database Context

    Using more than one programming language in the same project is common practice. Often, additional languages are introduced to projects to solve specific issues. While the practice is common, it is unclear whether it has an impact on developer productivity. In this paper, we present a pilot study investigating what happens when programmers switch between programming languages. The experiment is a repeated-measures, double-blind, randomized controlled trial with three groups experiencing various kinds of code switching in a database context. The results provide a rigorous testing methodology that can be replicated by us or others, and theoretical backing from the linguistics literature for why these effects might exist.

    Variation Factors in the Design and Analysis of Replicated Controlled Experiments - Three (Dis)similar Studies on Inspections versus Unit Testing

    Background. In formal experiments on software engineering, the number of factors that may impact an outcome is very high. Some factors are controlled and changed by design, while others are either unforeseen or due to chance. Aims. This paper aims to explore how context factors change in a series of formal experiments and to identify implications for experimentation and replication practices to enable learning from experimentation. Method. We analyze three experiments on code inspections and structural unit testing. The first two experiments use the same experimental design and instrumentation (replication), while the third, conducted by different researchers, replaces the programs and adapts the defect detection methods accordingly (reproduction). Experimental procedures and location also differ between the experiments. Results. Contrary to expectations, there are significant differences between the original experiment and the replication, as well as compared to the reproduction. Some of the differences are due to factors other than the ones designed to vary between experiments, indicating a sensitivity to context factors in software engineering experimentation. Conclusions. In aggregate, the analysis indicates that researchers who want to obtain reliable and repeatable empirical measures should consider reducing the complexity of software engineering experiments.

    Testing coupling relationships in object-oriented programs

    As we move toward developing object-oriented (OO) programs, the complexity traditionally found in functions and procedures is moving to the connections among components. Different faults occur when components are integrated to form higher-level structures that aggregate behavior and state. Consequently, we need to place more effort on testing the connections among components. Although OO technologies provide abstraction mechanisms for building components that can then be integrated to form applications, they also add new compositional relations that can contain faults. This paper describes techniques for analyzing and testing the polymorphic relationships that occur in OO software. The techniques adapt traditional data flow coverage criteria to consider definitions and uses among state variables of classes, particularly in the presence of inheritance, dynamic binding, and polymorphic overriding of state variables and methods. The application of these techniques can result in an increased ability to find faults and in overall higher-quality software.
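    The def-use coupling idea summarized above can be illustrated with a minimal sketch (hypothetical classes, not taken from the paper): a subclass that overrides a method redefines how a state variable is written, so a call site with dynamic binding pairs the same use with different definitions depending on the receiver's runtime type, and coverage criteria must exercise each binding.

    ```python
    class Account:
        def __init__(self):
            self.balance = 0  # state variable: initial definition

        def deposit(self, amount):
            self.balance += amount  # definition of `balance`

        def report(self):
            return self.balance  # use of `balance`


    class BonusAccount(Account):
        def deposit(self, amount):
            # Polymorphic override: a new definition of `balance` that the
            # parent class's tests never exercise.
            self.balance += amount + 1  # 1-unit bonus per deposit


    def process(acct, amount):
        acct.deposit(amount)  # dynamic binding: which definition runs depends on type
        return acct.report()  # use paired with whichever definition executed

    # Coupling-based def-use coverage requires tests for both bindings:
    print(process(Account(), 10))       # → 10
    print(process(BonusAccount(), 10))  # → 11
    ```

    A test suite that only ever passes `Account` to `process` achieves full statement coverage of `process` itself yet never executes the def-use pair introduced by the override, which is precisely the kind of integration fault these criteria target.
    
    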

    Constraint-bounded design search

    The design process requires continual checking of the consistency of design choices against given sets of goals to be fulfilled. Such a check is generally performed by comparing abstract representations of design goals with those of the sought real building objects (RBO), resulting from complex intellectual activities closely related to the designer's culture and to the environment in which he operates. In this chapter we define a possible formalization of such representations concerning the goals and the RBO that are usually considered in the architectural design process by our culture in our environment. The representation of design goals is performed by expressing their objective aspects (requirements) and by defining their allowable values (performance specifications). The resulting system of requirements defines the set of allowable solutions and infers an abstract representation of the sought building objects (BO), which consists of the set of characteristics (attributes and relations) considered relevant to represent the particular kind of RBO with respect to the consistency check against design goals. The values related to such characteristics define the performances of the RBO, while their set establishes its behaviour. Generally speaking, there is no single real object corresponding to an abstract representation but rather the whole class of RBO that are equivalent with respect to the values assumed by the considered characteristics. The more we increase the number of characteristics, as well as their specifications, the smaller the class becomes, until it coincides with a single real object, provided that the assessed specifications are fully consistent. Correspondingly, the representation evolves toward the total prefiguration of the RBO. It is therefore not possible to completely define a BO representation in advance, since it is inferred from the considered goals and is itself a result of the design process.
What can be established in advance is that any set of characteristics assumed to represent any RBO consists of hierarchic, topological, geometrical, and functional relations among the parts of the object at any level of aggregation (from components to space units, to building units, to the whole building), which we define as the representation structure (RS). Consequently, the RS may be thought of as the elementary structures that, by superposition and interaction, set up the abstract representation that best fits the design goals.

    Ordered Mesoporous to Macroporous Oxides with Tunable Isomorphic Architectures: Solution Criteria for Persistent Micelle Templates

    Porous and nanoscale architectures of inorganic materials have become crucial for a range of energy and catalysis applications, where the ability to control the morphology largely determines the transport characteristics and device performance. Despite the availability of a range of block copolymer self-assembly methods, the conditions for tuning key architectural features such as the inorganic wall thickness have remained elusive. Toward this end, we have developed solution processing guidelines that enable isomorphic nanostructures with tunable wall thickness. A new poly(ethylene oxide-b-hexyl acrylate) (PEO-b-PHA) structure-directing agent (SDA) was used to demonstrate the key solution design criteria. Specifically, the use of a polymer with a high Flory-Huggins effective interaction parameter, χ, and appropriate solution conditions leads to the kinetic entrapment of persistent micelle templates (PMT) for tunable isomorphic architectures. Solubility parameters are used to predict conditions for maintaining persistent micelle sizes despite changing equilibrium conditions. Here, the use of different inorganic loadings controls the inorganic wall thickness at constant pore size. This versatile method enabled a record 55 nm oxide wall thickness from micelle coassembly, as well as a seamless transition from mesoporous to macroporous materials by varying the polymer molar mass and solution conditions. The processing guidelines are generalizable and were elaborated with three inorganic systems, Nb2O5, WO3, and SiO2, that were thermally stable to 600 °C for access to crystalline materials.

    Between Horizontality and Centralisation: Organisational Form and Practice in the Finns Party

    This article provides the first comprehensive analysis of the Finns Party's (Perussuomalaiset [PS]) formal organisation and how it operates in practice. Following the framework of this thematic issue, we ask to what extent the PS's organisation follows the mass-party model and how centralised the party is in its internal decision-making. Analysis of party documents, association registries, and in-depth interviews with 24 party elite representatives reveals that the PS has developed a complex organisational structure and internal democracy since 2008. However, the power of members over the party's internal decision-making remains limited, despite the party's leadership having facilitated a more horizontal and inclusionary organisational culture after 2017. The study reveals how the party combines radically democratic elements in its leadership selection and programme development with a very high level of centralisation of formal power in the party executive, and how the party organisationally relies on a vast and autonomous but heterogeneous network of municipal associations. The article also discusses how PS elites perceive the advantages of having a wide and active organisation characterised by low entry and participation requirements, and how party-adjacent online activism both complements and complicates the functioning of the formal party organisation.