
    Phonological features emerge substance-freely from the phonetics and the morphology

    Theories of phonology claim variously that phonological elements are either innate or emergent, and either substance-full or substance-free. A hitherto underdeveloped source of evidence for choosing between the four possible combinations of these claims lies in showing precisely how a child can acquire phonological elements. This article presents computer simulations that showcase a learning algorithm with which the learner creates phonological elements from a large number of sound–meaning pairs. In the course of language acquisition, phonological features gradually emerge both bottom-up and top-down, that is, both from the phonetic input (i.e., sound) and from the semantic or morphological input (i.e., structured meaning). In our computer simulations, the child’s phonological features end up with emerged links to sounds (phonetic substance) as well as with emerged links to meanings (semantic substance), without containing either phonetic or semantic substance. These simulations therefore show that emergent substance-free phonological features are learnable. In the absence of learning algorithms for linking innate features to the language-specific, variable phonetic reality, as well as of learning algorithms for substance-full emergence, these results provide a new type of support for theories of phonology in which features are emergent and substance-free.
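
    One way to make the emergence idea concrete is a toy clustering sketch: phonetic tokens are grouped bottom-up into categories, and the resulting categories acquire top-down links to meanings through co-occurrence, while remaining contentless indices themselves. The single acoustic dimension, the two meanings, and the k-means-style procedure below are illustrative assumptions, not the article's simulation setup.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy sound-meaning pairs: each token is one acoustic value (e.g. a
    # formant-like measurement) paired with a meaning label.
    sounds = np.concatenate([rng.normal(300, 25, 200),   # tokens of meaning 'PATH'
                             rng.normal(700, 25, 200)])  # tokens of meaning 'POT'
    meanings = np.array(['PATH'] * 200 + ['POT'] * 200)

    # Bottom-up emergence: cluster the phonetic input into two categories
    # (a minimal 1-D k-means; the emergent category is just an index).
    centroids = rng.choice(sounds, size=2, replace=False)
    for _ in range(20):
        labels = np.argmin(np.abs(sounds[:, None] - centroids[None, :]), axis=1)
        centroids = np.array([sounds[labels == k].mean() for k in range(2)])

    # Top-down links: each emergent category is associated with meanings via
    # co-occurrence counts, without the category containing either substance.
    for k in range(2):
        links = {m: int(np.sum(meanings[labels == k] == m)) for m in ('PATH', 'POT')}
        print(f"category {k}: phonetic centroid ~{centroids[k]:.0f}, meaning links {links}")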

    Orchestrated learning: creating a company-specific production system (XPS)

    Author's accepted manuscript. This author accepted manuscript is deposited under a Creative Commons Attribution Non-commercial 4.0 International (CC BY-NC) licence. This means that anyone may distribute, adapt, and build upon the work for non-commercial purposes, subject to full attribution. If you wish to use this manuscript for commercial purposes, please contact [email protected]. Purpose: Companies create company-specific production systems (XPS) by tailoring generic concepts to fit their unique situation. However, little is known about how an XPS is created. This paper aims to provide insights into the creation of an XPS. Design/methodology/approach: A retrospective case study was conducted in a Norwegian multinational company over the period 1991–2006, using archival data and interviews. Findings: The development of the XPS did not start with a master plan. Instead, dispersed existing initiatives were built upon, along with an external search for novel ideas. Widespread experimentation took place, only later to be combined into a coherent approach. Once established, the XPS was disseminated internally and further refined. The CEO orchestrated the experimentation by facilitating the adaptation and combination of different concepts and by allocating resources to institutionalize the XPS in the global network. Originality/value: This paper is the first to study how an XPS is created. This study contributes novel empirical insights, and it highlights the role of top management in facilitating experimentation and step-by-step organizational learning.

    Automatic segmentation of MR brain images with a convolutional neural network

    Automatic segmentation in MR brain images is important for quantitative analysis in large-scale studies with images acquired at all ages. This paper presents a method for the automatic segmentation of MR brain images into a number of tissue classes using a convolutional neural network. To ensure that the method obtains accurate segmentation details as well as spatial consistency, the network uses multiple patch sizes and multiple convolution kernel sizes to acquire multi-scale information about each voxel. The method is not dependent on explicit features, but learns to recognise the information that is important for the classification based on training data. The method requires a single anatomical MR image only. The segmentation method is applied to five different data sets: coronal T2-weighted images of preterm infants acquired at 30 weeks postmenstrual age (PMA) and 40 weeks PMA, axial T2-weighted images of preterm infants acquired at 40 weeks PMA, axial T1-weighted images of ageing adults acquired at an average age of 70 years, and T1-weighted images of young adults acquired at an average age of 23 years. The method obtained the following average Dice coefficients over all segmented tissue classes for each data set, respectively: 0.87, 0.82, 0.84, 0.86 and 0.91. The results demonstrate that the method obtains accurate segmentations in all five sets, and hence demonstrates its robustness to differences in age and acquisition protocol.
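
    The multi-scale design (several patch sizes and convolution kernel sizes contributing to the classification of a single voxel) can be sketched as a small multi-branch network. A minimal PyTorch sketch is given below; the patch sizes, channel counts, and number of tissue classes are placeholder assumptions, not the configuration reported in the paper.

    import torch
    import torch.nn as nn

    class MultiScalePatchNet(nn.Module):
        """Classifies the centre voxel of a 2-D patch using parallel branches,
        each seeing a different patch size and convolution kernel size."""
        def __init__(self, n_classes=8, patch_sizes=(25, 51, 75), kernel_sizes=(3, 5, 7)):
            super().__init__()
            self.patch_sizes = patch_sizes
            self.branches = nn.ModuleList([
                nn.Sequential(
                    nn.Conv2d(1, 24, k, padding=k // 2), nn.ReLU(),
                    nn.Conv2d(24, 32, k, padding=k // 2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),  # pool each branch to a feature vector
                )
                for k in kernel_sizes
            ])
            self.classifier = nn.Linear(32 * len(kernel_sizes), n_classes)

        def forward(self, patches):
            # 'patches' is a list of tensors, one per scale: (batch, 1, p, p)
            feats = [b(x).flatten(1) for b, x in zip(self.branches, patches)]
            return self.classifier(torch.cat(feats, dim=1))

    # Dummy forward pass on random "patches" for a batch of 4 voxels.
    net = MultiScalePatchNet()
    patches = [torch.randn(4, 1, p, p) for p in net.patch_sizes]
    print(net(patches).shape)  # torch.Size([4, 8])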

    Balancing responsibilities: Effects of growth of variable renewable energy, storage, and undue grid interaction

    Electrical energy storage is often proposed as a solution for the mismatch between supply patterns of variable renewable electricity sources and electricity demand patterns. However, the effectiveness and usefulness of storage may vary under different circumstances. This study provides an abstract perspective on the merits of electrical energy storage integrated with decentralized supply systems consisting of solar PV and wind power in a meso-level, residential-sector context. We used a balancing model to couple demand and supply patterns based on Dutch weather data and to assess the resultant loads under various scenarios. Our model results highlight differences in storage effectiveness for solar PV and wind power, and strong diminishing-returns effects. Small storage capacities can be functional in reducing surpluses in over-dimensioned supply systems and shortages in under-dimensioned supply systems. However, full elimination of imbalance requires substantial storage capacities. The overall potential of storage to mitigate the imbalance of variable renewable energy is limited. Integration of storage in local supply systems may have self-sufficiency and cost-effectiveness benefits for prosumers but may have additional peak-load disadvantages for grid operators. Adequate policy measures beyond current curtailment strategies are required to ensure a proper distribution of the benefits and responsibilities associated with variable renewable energy and storage.
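
    The balancing logic and the diminishing returns of added capacity can be illustrated with a toy hourly simulation. The demand and supply profiles, the round-trip efficiency, and the capacity sweep below are illustrative assumptions, not the Dutch weather data or the balancing model used in the study.

    import numpy as np

    rng = np.random.default_rng(1)
    hours = np.arange(24 * 365)

    # Illustrative hourly profiles in arbitrary units.
    demand = 1.0 + 0.3 * np.sin(2 * np.pi * hours / 24)
    solar = 2.0 * np.clip(np.sin(2 * np.pi * (hours % 24 - 6) / 24), 0, None)
    wind = np.clip(rng.normal(0.8, 0.5, hours.size), 0, None)

    def residual_imbalance(supply, demand, capacity, efficiency=0.9):
        """Charge storage with surpluses, discharge on shortages; return the
        energy still imported (shortage) plus the energy still exported (surplus)."""
        soc, imported, exported = 0.0, 0.0, 0.0
        for s, d in zip(supply, demand):
            net = s - d
            if net > 0:                                    # surplus: charge
                charge = min(net, (capacity - soc) / efficiency)
                soc += charge * efficiency
                exported += net - charge
            else:                                          # shortage: discharge
                discharge = min(-net, soc)
                soc -= discharge
                imported += -net - discharge
        return imported + exported

    supply = solar + wind
    for cap in [0, 2, 5, 10, 20, 50]:
        print(f"storage capacity {cap:>2}: remaining imbalance {residual_imbalance(supply, demand, cap):8.0f}")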

    System analysis of the bio-based economy in Colombia: A bottom-up energy system model and scenario analysis

    The transition to a sustainable bio-based economy is perceived as a valid path towards low-carbon development for emerging economies that have rich biomass resources. In the case of Colombia, the role of biomass has been tackled through qualitative roadmaps and regional climate policy assessments. However, neither of these approaches has addressed the complexity of the bio-based economy systematically in the wider context of emission mitigation and energy and chemicals supply. In response to this limitation, we extended a bottom-up energy system optimization model by adding a comprehensive database of novel bio-based value chains. We included advanced road and aviation biofuels, (bio)chemicals, bioenergy with carbon capture and storage (BECCS), and integrated biorefinery configurations. A scenario analysis was conducted for the period 2015–2050, which reflected uncertainties in the capacity for technological learning, climate policy ambitions, and land availability for energy crops. Our results indicate that biomass can play an important, even if variable, role in supplying 315–760 PJ/y of modern bio-based products. In pursuit of a deep decarbonization trajectory, the large-scale mobilization of biomass resources can reduce the cost of the energy system by up to 11 billion $/year, the marginal abatement cost by 62%, and the potential reliance on imports of oil and chemicals in the future. The mitigation potential of BECCS can reach 24–29% of the cumulative avoided emissions between 2015 and 2050. The proposed system analysis framework can provide detailed quantitative information on the role of biomass in the low-carbon development of emerging economies.
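
    At its core, a bottom-up energy system optimization model of this kind chooses the cheapest mix of technologies that meets demand subject to policy constraints such as an emissions cap. The tiny linear program below illustrates that structure with two stylized technologies; all numbers and the scipy-based formulation are illustrative assumptions, not values or code from the Colombian model.

    import numpy as np
    from scipy.optimize import linprog

    # Two stylized fuel-supply technologies: fossil fuel and advanced biofuel.
    cost = np.array([10.0, 16.0])        # cost per unit of energy supplied
    emissions = np.array([0.07, 0.01])   # emissions per unit of energy supplied
    demand = 500.0                       # total energy demand to be met
    cap = 20.0                           # emissions cap (the climate-policy scenario)

    res = linprog(
        c=cost,                              # minimise total system cost
        A_ub=[emissions], b_ub=[cap],        # total emissions <= cap
        A_eq=[[1.0, 1.0]], b_eq=[demand],    # fossil + biofuel = demand
        bounds=[(0, None), (0, None)],
    )
    fossil, bio = res.x
    print(f"fossil {fossil:.0f}, biofuel {bio:.0f}, system cost {res.fun:.0f}")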

    Role of focused assessment with sonography for trauma as a screening tool for blunt abdominal trauma in young children after high energy trauma

    Background: The objective of the study was to review the utility of focused assessment with sonography for trauma (FAST) as a screening tool for blunt abdominal trauma (BAT) in children involved in high energy trauma (HET), and to determine whether a FAST could replace computed tomography (CT) in clinical decision-making regarding paediatric BAT. Method: Children who presented at the Trauma Unit of the Red Cross War Memorial Children's Hospital, Cape Town, after HET underwent both a physical examination and a FAST. The presence of free fluid in the abdomen and pelvis was assessed using the FAST. Sensitivity, specificity, and positive and negative predictive values (PPV and NPV) for identifying intra-abdominal injury were calculated for the physical examination and the FAST, both individually and combined. Results: Seventy-five patients met the inclusion criteria for HET, as follows: pedestrian motor vehicle crashes (MVCs) (n = 46), assault (n = 14), fall from a height (n = 9), MVC passenger (n = 4) and other (n = 2). The ages of the patients ranged from 3 months to 13 years. The sensitivity of the physical examination was 0.80, specificity 0.83, PPV 0.42 and NPV 0.96. The sensitivity of the FAST was 0.50, specificity 1.00, PPV 1.00 and NPV 0.93. Sensitivity increased to 0.90 when the physical examination was combined with the FAST. Nonoperative management was used in 73 patients; two underwent an operation. Conclusion: A FAST should be performed in combination with a physical examination on every paediatric patient involved in HET to detect BAT. When both are negative, nonoperative management can be implemented without fear of missing a clinically significant injury. FAST is a safe, effective and easily accessible alternative to CT, which avoids ionising radiation and aids in clinical decision-making.
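
    The screening metrics reported here follow directly from a 2x2 confusion table. The small helper below shows the formulas; the counts in the example are hypothetical numbers chosen only to be consistent with the reported physical-examination values, not the study's actual table.

    def screening_metrics(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
        return {
            "sensitivity": tp / (tp + fn),  # injured patients correctly flagged
            "specificity": tn / (tn + fp),  # uninjured patients correctly cleared
            "ppv": tp / (tp + fp),          # positive results that are truly injured
            "npv": tn / (tn + fn),          # negative results that are truly uninjured
        }

    # Hypothetical counts for illustration only (75 patients in total).
    print(screening_metrics(tp=8, fp=11, fn=2, tn=54))
    # -> sensitivity 0.80, specificity ~0.83, PPV ~0.42, NPV ~0.96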

    On the Number of Iterations for Dantzig-Wolfe Optimization and Packing-Covering Approximation Algorithms

    We give a lower bound on the iteration complexity of a natural class of Lagrangean-relaxation algorithms for approximately solving packing/covering linear programs. We show that, given an input with m random 0/1-constraints on n variables, with high probability, any such algorithm requires Ω(ρ log(m)/ε²) iterations to compute a (1+ε)-approximate solution, where ρ is the width of the input. The bound is tight for a range of the parameters (m, n, ρ, ε). The algorithms in the class include Dantzig-Wolfe decomposition, Benders' decomposition, Lagrangean relaxation as developed by Held and Karp [1971] for lower-bounding TSP, and many others (e.g. by Plotkin, Shmoys, and Tardos [1988] and Grigoriadis and Khachiyan [1996]). To prove the bound, we use a discrepancy argument to show an analogous lower bound on the support size of (1+ε)-approximate mixed strategies for random two-player zero-sum 0/1-matrix games.
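
    The algorithms in this class keep a vector of multipliers (weights) that is updated multiplicatively against the worst violated constraint, and the lower bound concerns how many such rounds are unavoidable. A minimal exponential-weights sketch for a random two-player zero-sum 0/1-matrix game, the setting used in the proof, is given below; the learning rate and round count are generic textbook choices, not parameters taken from the paper.

    import numpy as np

    rng = np.random.default_rng(2)
    m, n, eps = 200, 200, 0.1
    A = rng.integers(0, 2, size=(m, n)).astype(float)  # random 0/1 payoff matrix

    # Exponential weights for the row (minimising) player of min_x max_y x^T A y.
    # The number of rounds needed scales like log(m) / eps^2.
    T = int(np.ceil(4 * np.log(m) / eps**2))
    w = np.ones(m)
    x_sum = np.zeros(m)
    y_sum = np.zeros(n)
    for _ in range(T):
        x = w / w.sum()
        j = int(np.argmax(x @ A))        # column player's best response to x
        x_sum += x
        y_sum[j] += 1
        w *= np.exp(-eps * A[:, j] / 2)  # penalise rows hit by that column

    x_bar, y_bar = x_sum / T, y_sum / T
    # The two quantities bracket the game value and differ by roughly eps.
    print(f"{T} rounds: upper bound {np.max(x_bar @ A):.3f}, lower bound {np.min(A @ y_bar):.3f}")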