
    Transferable Natural Language Interface to Structured Queries aided by Adversarial Generation

    A natural language interface (NLI) to structured queries is intriguing due to its wide industrial applications and high economic value. In this work, we tackle the problem of domain adaptation for NLI with limited data on the target domain. Two important approaches are considered: (a) effective general-knowledge learning via source-domain semantic parsing, and (b) data augmentation on the target domain. We present a Structured Query Inference Network (SQIN) to enhance learning for domain adaptation by separating schema information from the NL input and decoding SQL in a more structure-aware manner; we also propose a GAN-based augmentation technique (AugmentGAN) to mitigate the lack of target-domain data. We report solid results on GeoQuery, Overnight, and WikiSQL, demonstrating state-of-the-art performance on both in-domain and domain-transfer tasks. Comment: 8 pages, 3 figures; accepted by AAAI Workshop 2019; accepted by International Conference of Semantic Computing (ICSC) 201
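    The schema-separation idea above can be illustrated with a minimal sketch: before parsing, mentions of tables and columns in the question are replaced with typed placeholders, so the parser can learn domain-independent patterns. The function name, placeholder format, and schema terms below are illustrative assumptions, not SQIN's actual implementation (which also covers AugmentGAN and structure-aware SQL decoding).

```python
# Hedged sketch: delexicalizing schema mentions in an NL question.
# Schema terms and placeholder naming are hypothetical.

def anonymize_schema(question, schema_terms):
    """Replace schema mentions in the question with typed placeholders,
    so downstream parsing sees domain-independent patterns."""
    mapping = {}
    tokens = []
    for tok in question.split():
        key = tok.lower().rstrip("?.,")  # crude normalization for the sketch
        if key in schema_terms:
            placeholder = mapping.setdefault(key, f"<COL{len(mapping)}>")
            tokens.append(placeholder)
        else:
            tokens.append(tok)
    return " ".join(tokens), mapping

anon, m = anonymize_schema("What is the population of Texas?",
                           {"population", "texas"})
```

    A real system would use the schema of the target database and a learned matcher rather than exact string lookup; the mapping `m` lets the placeholders be restored when the SQL is emitted.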

    On an inverse elastic wave imaging scheme for nearly incompressible materials

    This paper is devoted to the algorithmic development of inverse elastic scattering problems. We focus on reconstructing the locations and shapes of elastic scatterers with known dictionary data for nearly incompressible materials. The scatterers include non-penetrable rigid obstacles and penetrable media, and we use time-harmonic elastic point signals as the incident waves. The scattered waves are collected in a relatively small backscattering aperture on a bounded surface. A two-stage algorithm is proposed for the reconstruction, and only two incident waves of different wavenumbers are required. The unknown scatterer is first approximately located using the measured data at a small wavenumber, and then its shape is determined from the computed location together with the measured data at a regular wavenumber. The corresponding mathematical principle is presented with rigorous analysis. Numerical tests illustrate the effectiveness and efficiency of the proposed method.

    Effects of increasing concentrations of corn distiller's dried grains with solubles on egg production, internal quality of eggs, chemical composition and nutrient content of egg yolk

    The objective of this study was to determine the effects of feeding high levels of corn distiller's dried grains with solubles (DDGS) on egg production, internal quality of eggs, and the chemical composition and nutrient content of egg yolk. Four diets were formulated to contain 0, 17, 35, or 50% corn DDGS in a corn and soybean meal base. A total of 240 54-week-old single-comb White Leghorn laying hens were randomly allotted 2 birds per cage, with three consecutive cages representing an experimental unit (EU). Each EU was assigned to one of the four dietary treatments according to a completely randomized design. Hens were fed for a 24-week experimental period after a transition feeding phase in which corn DDGS inclusion was increased gradually over four weeks. After the first 12-week period, the diets were reformulated to meet amino acid requirements. Egg production was recorded daily and feed consumption was measured weekly. Egg components, yolk color, Haugh unit during storage, and shell breaking strength were measured every two weeks, as were the chemical composition and nutritional components of egg yolk. Chemical composition of egg yolk included protein, lipids, and moisture; nutritional components included fatty acid composition and the content of cholesterol, lutein, and choline. Egg production, egg weight, egg mass, feed intake, and feed efficiency were adversely affected by the highest level of DDGS (50%) before diet reformulation. Once diets were reformulated with increased concentrations of lysine and methionine, differences among the dietary treatments were reduced, performance on the 50% DDGS diet improved significantly, and no differences in egg production, egg weight, or feed intake among DDGS treatments were found during the last 6 weeks of the study. DDGS diets positively affected the internal quality of eggs during storage.
Yolk color increased linearly as DDGS concentration increased, and the Haugh unit improved in the 50% DDGS treatment group. Shell breaking strength was not influenced by DDGS diets. Shell weight percentage increased at the 50% dietary DDGS level. Egg yolk from hens fed the highest DDGS-containing diet tended to have higher fat content and lower protein content. Total polyunsaturated fatty acids were significantly increased by the DDGS diets. The contents of choline and cholesterol were initially higher in the 50% DDGS treatment group, but the differences among the four DDGS treatments diminished in the later period, with no difference found during the last 4 weeks. Lutein content increased linearly as DDGS levels increased. It was concluded that up to 50% DDGS can be included in the layers' diet without affecting egg weight, feed intake, egg production, or internal egg quality, as long as digestible amino acids are sufficient in DDGS-added diets. Moreover, this study indicated that feeding high levels of DDGS can increase the content of lutein and polyunsaturated fatty acids in egg yolk, but may not influence the content of cholesterol and choline.

    Quantifiable non-functional requirements modeling and static verification for web service compositions

    As service oriented architectures have become more widespread in industry, many complex web services are assembled using independently developed modular services from different vendors. Although the functionality of a composite web service is ensured during the composition process, the non-functional requirements (NFRs) are often ignored. Since quality of service plays an increasingly important role in modern service-based systems, there is a growing need for effective approaches to verifying that a composite web service not only offers the required functionality but also satisfies the desired NFRs. Current approaches to verifying NFRs of composite services (as opposed to individual services) remain largely ad hoc and informal in nature. This is especially problematic for high-assurance composite web services, i.e., those with critical NFRs such as security, safety, and reliability. Examples of such applications include traffic control, medical decision support, and coordinated response systems for civil emergencies; the latter serves to motivate and illustrate the work described here. In this dissertation we develop techniques for ensuring that a composite service meets user-specified NFRs expressible as hard constraints, e.g., that the messages of particular operations must be authenticated. We introduce an automata-based framework for verifying that a composite service satisfies the desired NFRs based on the known guarantees regarding the non-functional properties of the component services. This automata-based model is able to represent NFRs that are hard, quantitative constraints on the composite web services. The model addresses two issues previously not handled in the modeling and verification of NFRs for composite web services: (1) the scope of the NFRs, and (2) consistency checking of multiple NFRs.
    The scope of an NFR on a web service composition is the effective range of the NFR over the sub-workflows and modular services of the composition. It allows more precise description of an NFR constraint and more efficient verification. When multiple NFRs exist and their scopes overlap, consistency checking is necessary to avoid wasting verification effort on conflicting constraints. The approach presented here captures scope information in the model and uses it to check the consistency of multiple NFRs prior to the static verification of web service compositions. We illustrate how our approach can be used to verify security requirements for an Emergency Management System. We then focus on families of highly customizable composed web services, where repeated verification of similar sets of NFRs can waste computational resources. We introduce a new approach that extends software product line engineering techniques to the web service composition domain. The resulting technique uses a partitioning similar to that between domain engineering and application engineering in the product-line context: it specifies the options that the user can select and constructs the resulting web service compositions. By first creating a web-service composition search space that satisfies the common requirements and then querying the search space as the user makes customization decisions, the technique provides a more efficient way to verify customizable web services. A decision model, illustrated with examples from the emergency-response application, is created to interact with customers and ensure the consistency of their specifications. The capability to reuse the composition search space is shown to improve the quality of the composite services and reduce the cost of re-verifying the same compositions.
By distinguishing the commonalities and variabilities of the web services, we divide the composition into two stages: the preparation stage (constructing all commonalities) and the customization stage (choosing optional and alternative features). We thus shift most of the computation overhead into the first, design-time stage in order to improve runtime efficiency in the second stage. A simulation platform was constructed to conduct experiments on the two verification approaches and three strategies introduced in this dissertation. The results of these experiments demonstrate the verification-efficiency advantage of our automaton-based model with scoping information, and show how to choose the most efficient of the three strategies for verifying multiple NFRs under different circumstances. The results also indicate that the software product line approach yields significant efficiency improvements over traditional on-demand verification for highly customizable web service compositions.
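    The scope-aware consistency check described above can be sketched very simply: two scoped NFRs can only conflict if their scopes overlap, so disjoint scopes need no pairwise analysis. The constraint names and the conflict rule below are illustrative assumptions, not the dissertation's formal automata-based model.

```python
# Hedged sketch: pre-verification consistency check of scoped NFRs.
# A scope is modeled as a set of component-service names; two NFRs
# conflict only if their scopes overlap AND their constraints are
# mutually exclusive. The conflict table is hypothetical.

CONFLICTS = {("no-encryption", "require-encryption")}

def scopes_overlap(a, b):
    return bool(set(a) & set(b))

def consistent(nfrs):
    """nfrs: list of (constraint, scope) pairs."""
    for i, (c1, s1) in enumerate(nfrs):
        for c2, s2 in nfrs[i + 1:]:
            if scopes_overlap(s1, s2) and (
                    (c1, c2) in CONFLICTS or (c2, c1) in CONFLICTS):
                return False
    return True

# Disjoint scopes: the nominally contradictory constraints coexist.
ok = consistent([("require-encryption", ["triage", "dispatch"]),
                 ("no-encryption", ["logging"])])
```

    Running the check before static verification, as the dissertation proposes, avoids spending verification effort on a composition whose requirement set can never be satisfied.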

    Automata-Based Verification of Non-Functional Requirements in Web Service Composition

    We address the problem of how to provide guarantees to a user that an automatically generated composition of independently developed web services meets its non-functional requirements (NFRs). The user-specified NFRs take the form of hard constraints. We introduce an automata-based model for representing and reasoning about NFRs and for verifying conformance to them. The approach described here enables this verification by lifting the NFR analysis from the level of individual services to the level of the search space of candidate compositions obtained from the functional requirements. The proposed approach can accommodate different subsets of NFRs for different components of a composite service. We introduce three different strategies for handling multiple NFRs and analyze their relative advantages and disadvantages under different scenarios. We present results showing that this approach supports efficient re-verification of web-service compositions whenever NFRs are updated. The approach has been applied to NFR-based service composition in an Emergency Management System.

    Symmetric Wasserstein Autoencoders

    Leveraging the framework of Optimal Transport, we introduce a new family of generative autoencoders with a learnable prior, called Symmetric Wasserstein Autoencoders (SWAEs). We propose to symmetrically match the joint distributions of the observed data and the latent representation induced by the encoder and the decoder. The resulting algorithm jointly optimizes the modelling losses in both the data and the latent spaces, with the loss in the data space yielding a denoising effect. With this symmetric treatment of the data and the latent representation, the algorithm implicitly preserves the local structure of the data in the latent space. To further improve the quality of the latent representation, we incorporate a reconstruction loss into the objective, which significantly benefits both generation and reconstruction. We empirically show the superior performance of SWAEs over state-of-the-art generative autoencoders in terms of classification, reconstruction, and generation. Comment: Accepted by UAI202
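    The symmetric-matching idea can be sketched in one dimension, where the 1-Wasserstein distance between two equal-size samples has a closed form via sorting. The loss below, with terms in both the data and latent spaces plus a reconstruction term, is a simplified stand-in for the paper's OT objective; the encoder/decoder and the equal weighting are illustrative assumptions.

```python
import numpy as np

def w1_1d(x, y):
    """Exact 1-Wasserstein distance between equal-size 1-D samples:
    mean absolute difference of the sorted samples."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

def symmetric_loss(x, z, encode, decode, lam=1.0):
    """Symmetrically match the encoder joint (x, encode(x)) against the
    decoder joint (decode(z), z), plus a reconstruction term."""
    data_term = w1_1d(x, decode(z))      # loss in data space
    latent_term = w1_1d(encode(x), z)    # loss in latent space
    recon = np.mean(np.abs(decode(encode(x)) - x))
    return data_term + latent_term + lam * recon
```

    In the degenerate case where encoder and decoder are identities and the prior samples equal the data, all three terms vanish; training an actual SWAE would minimize such a loss over neural encoder/decoder parameters with a sample-based OT estimate in higher dimensions.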

    Adaptive Scaling for Sparse Detection in Information Extraction

    This paper focuses on detection tasks in information extraction, where positive instances are sparsely distributed and models are usually evaluated by F-measure on the positive classes. These characteristics often result in deficient performance of neural-network-based detection models. In this paper, we propose adaptive scaling, an algorithm that handles the positive sparsity problem and directly optimizes F-measure via dynamic cost-sensitive learning. To this end, we borrow the idea of marginal utility from economics and propose a theoretical framework for measuring instance importance without introducing any additional hyper-parameters. Experiments show that our algorithm leads to more effective and stable training of neural-network-based detection models. Comment: Accepted to ACL201
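    One simplified reading of dynamic cost-sensitive learning for sparse detection is to down-weight the abundant negative class using running batch statistics, so the effective class balance tracks the current precision/recall trade-off. The particular weight beta = TP / (TP + FP + FN) below is an illustrative choice for the sketch, not necessarily the paper's exact marginal-utility derivation.

```python
# Hedged sketch: batch-adaptive class weights for a detection loss.
# Positives keep weight 1; negatives get beta in [0, 1] computed from
# the batch confusion counts, with no extra hyper-parameter.

def adaptive_weights(y_true, y_pred):
    """Return {class_label: loss_weight} from batch statistics."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = tp + fp + fn
    beta = tp / denom if denom else 1.0  # no signal yet: keep balance
    return {1: 1.0, 0: beta}

w = adaptive_weights([1, 1, 0, 0, 0], [1, 0, 0, 1, 0])
```

    The weights would multiply per-instance losses during training; as detection improves (TP grows relative to FP and FN), negatives regain weight, which is the dynamic behavior the abstract describes.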

    Uniqueness in inverse acoustic scattering with phaseless near-field measurements

    This paper is devoted to the uniqueness of inverse acoustic scattering problems with the modulus of near-field data. By utilizing the superpositions of point sources as the incident waves, we rigorously prove that the phaseless near-fields collected on an admissible surface can uniquely determine the location and shape of the obstacle as well as its boundary condition and the refractive index of a medium inclusion, respectively. We also establish the uniqueness in determining a locally rough surface from the phaseless near-field data due to superpositions of point sources. These are novel uniqueness results in inverse scattering with phaseless near-field data. Comment: 17 pages, 2 figures. arXiv admin note: substantial text overlap with arXiv:1812.0329

    Two Single-shot Methods for Locating Multiple Electromagnetic Scatterers

    We develop two inverse scattering schemes for locating multiple electromagnetic (EM) scatterers from the electric far-field measurement corresponding to a single incident/detecting plane wave. The first scheme is for locating scatterers of small size compared to the wavelength of the detecting plane wave. The multiple scatterers can be extremely general, with an unknown number of components, and each scatterer component can be either an impenetrable perfectly conducting obstacle or a penetrable inhomogeneous medium with unknown content. The second scheme is for locating multiple perfectly conducting obstacles of regular size compared to the detecting EM wavelength. The number of obstacle components is not required to be known in advance, but the shape of each component must belong to a certain known admissible class, which may consist of multiple different reference obstacles. The second scheme can also be extended to include medium components if a certain generic condition is satisfied. Both schemes are based on novel indicator functions whose indicating behavior is used to locate the scatterers. No inversion is involved in calculating the indicator functions, and the proposed methods are very efficient and robust to noise. Rigorous mathematical justifications are provided, and extensive numerical experiments are conducted to illustrate the effectiveness of the imaging schemes.

    Resource Allocation in Cloud Radio Access Networks with Device-to-Device Communications

    To alleviate the burden on the fronthaul and reduce transmission latency, device-to-device (D2D) communication is introduced into cloud radio access networks (C-RANs). Considering dynamic traffic arrivals and time-varying channel conditions, resource allocation in C-RANs with D2D is formulated as a stochastic optimization problem aimed at maximizing the overall throughput subject to network stability, interference, and fronthaul capacity constraints. Leveraging the Lyapunov optimization technique, the stochastic optimization problem is transformed into a delay-aware optimization problem, which is a mixed-integer nonlinear programming problem and can be decomposed into three subproblems: mode selection, uplink beamforming design, and power control. An optimization solution consisting of a modified branch-and-bound method and a weighted minimum mean square error approach is developed to obtain a close-to-optimal solution. Simulation results validate that D2D can improve throughput, decrease latency, and alleviate the burden on the constrained fronthaul in C-RANs. Furthermore, an average throughput-delay tradeoff can be achieved by the proposed solution.
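    The Lyapunov transformation mentioned above follows the standard drift-plus-penalty pattern: each slot, the controller picks the action that trades off queue drift (delay) against a V-weighted utility (throughput). The single-queue toy below is an illustrative sketch with made-up numbers, not the paper's multi-queue C-RAN formulation with mode selection, beamforming, and power control.

```python
# Hedged sketch: one-slot drift-plus-penalty decision for a single
# queue. Quadratic Lyapunov function L(q) = q^2 / 2; the controller
# minimizes drift - V * throughput over a finite action set.

def drift_plus_penalty_choice(queue, arrivals, actions, V=10.0):
    """actions: list of (service_rate, throughput) pairs.
    Returns the action minimizing the drift-plus-penalty objective."""
    def objective(a):
        service, throughput = a
        next_q = max(queue + arrivals - service, 0.0)
        drift = 0.5 * (next_q ** 2 - queue ** 2)   # Lyapunov drift
        return drift - V * throughput               # penalty term
    return min(actions, key=objective)

choice = drift_plus_penalty_choice(5.0, 2.0,
                                   [(0.0, 0.0), (3.0, 1.0), (6.0, 2.0)])
```

    Larger V pushes the choice toward throughput at the cost of longer queues, which is the average throughput-delay tradeoff the abstract reports.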