An open standard for the exchange of information in the Australian timber sector
The purpose of this paper is to describe business-to-business (B2B) communication and the characteristics of an open standard for electronic communication within the Australian timber and wood products industry. Current issues, future goals and strategies for using B2B communication are also considered.
From the perspective of the timber industry, this study is important because supply chain efficiency is a key component of an organisation's strategy for gaining a competitive advantage in the marketplace. Substantial improvement in supply chain performance is possible with improved B2B communication, which serves both to build trust and to provide real-time marketing data.
Traditional methods of facilitating B2B communication, such as electronic data interchange (EDI), have a number of disadvantages, including high implementation and running costs and a rigid, inflexible messaging standard. Information and communications technologies (ICT) have supported the emergence of web-based EDI, which retains the advantages of the traditional paradigm while negating its disadvantages. This has been further extended by the advent of the Semantic Web, which rests on the fundamental idea that web resources should be annotated with semantic markup that captures information about their meaning and facilitates meaningful machine-to-machine communication.
This paper provides an ontology expressed in OWL (Web Ontology Language) for the Australian timber sector that can be used in conjunction with Semantic Web services to provide effective and low-cost B2B communications.
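The paper's actual ontology is not reproduced here, but a minimal sketch of what such an OWL ontology could look like, written in Turtle syntax, is given below. The namespace, class and property names (`TimberProduct`, `SawnTimber`, `hasGrade`, `suppliedBy`) are invented for illustration and are not taken from the paper.

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix :     <http://example.org/timber#> .

# Hypothetical product hierarchy for the timber sector.
:TimberProduct a owl:Class ;
    rdfs:label "Timber product" .

:SawnTimber a owl:Class ;
    rdfs:subClassOf :TimberProduct .

:Supplier a owl:Class .

# A datatype property attaching a grade code to a product.
:hasGrade a owl:DatatypeProperty ;
    rdfs:domain :TimberProduct ;
    rdfs:range  xsd:string .

# An object property linking a product to its supplier.
:suppliedBy a owl:ObjectProperty ;
    rdfs:domain :TimberProduct ;
    rdfs:range  :Supplier .
```

Because the vocabulary is machine-readable, two trading partners that share this ontology can exchange order and stock messages whose terms carry agreed-upon meaning, which is the Semantic Web service scenario the abstract describes.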
Hypothesis Only Baselines in Natural Language Inference
We propose a hypothesis-only baseline for diagnosing Natural Language Inference (NLI). Especially when an NLI dataset assumes that inference occurs based purely on the relationship between a context and a hypothesis, it follows that assessing entailment relations while ignoring the provided context is a degenerate solution. Yet, through experiments on ten distinct NLI datasets, we find that this approach, which we refer to as a hypothesis-only model, is able to significantly outperform a majority-class baseline across a number of NLI datasets. Our analysis suggests that statistical irregularities may allow a model to perform NLI in some datasets beyond what should be achievable without access to the context.
Comment: Accepted at *SEM 2018 as a long paper. 12 pages.
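The two baselines being compared can be illustrated with a short, self-contained sketch. The toy examples and the negation-cue rule below are invented for illustration; they mimic the kind of statistical irregularity (negation words correlating with the "contradiction" label) that lets a model ignore the premise entirely.

```python
from collections import Counter

# Toy NLI examples: (premise, hypothesis, label). The correlation between
# negation words and "contradiction" is deliberate, to mimic annotation
# artifacts; these data are invented, not from any real NLI dataset.
data = [
    ("A man plays guitar.", "A person makes music.", "entailment"),
    ("A dog runs outside.", "An animal is outdoors.", "entailment"),
    ("Kids eat lunch.", "Children are eating.", "entailment"),
    ("A man plays guitar.", "The man is not playing.", "contradiction"),
    ("A dog runs outside.", "No animal is moving.", "contradiction"),
]

def majority_baseline(examples):
    """Accuracy of always predicting the most frequent label."""
    majority = Counter(label for _, _, label in examples).most_common(1)[0][0]
    return sum(label == majority for _, _, label in examples) / len(examples)

def hypothesis_only(examples):
    """Accuracy of a model that never looks at the premise: it predicts
    "contradiction" whenever the hypothesis contains a negation cue."""
    correct = 0
    for _, hypothesis, label in examples:
        tokens = set(hypothesis.lower().split())
        pred = "contradiction" if {"not", "no"} & tokens else "entailment"
        correct += pred == label
    return correct / len(examples)

print(majority_baseline(data))  # 0.6
print(hypothesis_only(data))    # 1.0
```

On this toy set the hypothesis-only rule reaches perfect accuracy while ignoring the premise, which is exactly the degenerate behaviour the paper uses as a diagnostic.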
Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems
Natural language generation (NLG) is a critical component of spoken dialogue systems and has a significant impact on both usability and perceived quality. Most NLG systems in common use employ rules and heuristics and tend to generate rigid and stylised responses without the natural variation of human language. They are also not easily scaled to systems covering multiple domains and languages. This paper presents a statistical language generator based on a semantically controlled Long Short-Term Memory (LSTM) structure. The LSTM generator can learn from unaligned data by jointly optimising sentence planning and surface realisation using a simple cross-entropy training criterion, and language variation can be easily achieved by sampling from output candidates. An objective evaluation in two differing test domains showed that the proposed method improved performance over previous methods while using fewer heuristics. Human judges scored the LSTM system higher on informativeness and naturalness and overall preferred it to the other systems.
Comment: To appear in EMNLP 201
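The "simple cross-entropy training criterion" mentioned in the abstract is just the average negative log-probability the generator assigns to the reference tokens at each time step. A pure-Python sketch is below; the vocabulary size, logits and target word ids are invented for illustration, and a real implementation would compute this over the LSTM's per-step output distributions.

```python
import math

def softmax(logits):
    """Numerically stable softmax over one step's vocabulary logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def cross_entropy(step_logits, target_ids):
    """Mean -log p(target token) over the time steps of one sentence."""
    total = 0.0
    for logits, target in zip(step_logits, target_ids):
        total += -math.log(softmax(logits)[target])
    return total / len(target_ids)

# Two time steps over a 3-word vocabulary; the reference sentence is the
# token sequence [0, 2]. Minimising this quantity trains the generator.
loss = cross_entropy([[2.0, 0.5, 0.1], [0.2, 0.1, 1.5]], [0, 2])
print(loss)
```

Sampling from the per-step softmax distributions, rather than always taking the argmax, is what gives the language variation the abstract refers to.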
A comparison between finite element analysis code and cosmos software
To fulfil the various needs of different users in different applications, there must be a mutual understanding between the software house and the users: the developers should know exactly what the users need in their work, and the developed software should perform accordingly. The users can come from any discipline, with designers and engineers among them. In this project, a study was carried out comparing the effectiveness of the COSMOS software against finite element analysis code generated for this work, for performing stress analysis on a 3D object. In the study, a load is imposed on the subject area and the resulting stress is visualised as a gradual change of colours. Engineers and designers will benefit from this study by sharpening their design skills and obtaining a well-defined design, as competition intensifies day after day.
NASA Langley Scientific and Technical Information Output: 1996
This document is a compilation of the scientific and technical information that the Langley Research Center has produced during the calendar year 1996. Included are citations for Formal Reports, High-Numbered Conference Publications, High-Numbered Technical Memorandums, Contractor Reports, Journal Articles and Other Publications, Meeting Presentations, Technical Talks, Computer Programs, Tech Briefs, and Patents.
RegTech in public and private sectors: the nexus between data, technology and regulation
Higher regulatory compliance requirements, fast and continuous changes in regulation, and high digital dynamics in the financial markets are powering RegTech (regulatory technology), defined as technology-enabled innovation applied to the world of regulation, compliance, risk management, reporting and supervision. This work builds on a systematic literature review and a bibliometric analysis of the literature on RegTech, its influential papers and authors, its main areas of research, its past and its future. The resulting multi-dimensional framework bridges four main dimensions, starting with regulation and technology, where one or more regulations, not necessarily financial ones, are addressed with the support of technologies (e.g. artificial intelligence, DLT, blockchain, smart contracts, APIs). Data play a central role: sharing them enables data ecosystems in which additional value can be attained by each market participant, while data automation and machine-readable regulations empower regulators to pull data directly from banks' systems and combine these data with data obtained directly from customers or other external sources. Several applications emerge, both for regulated entities, covering matters of compliance, monitoring, risk management, reporting and operations, and for authorities, which can leverage RegTech (SupTech) solutions to make policies, to undertake their authorising, supervising and enforcement operations, for monitoring and controlling purposes, and even to issue fines automatically. As a consequence, stakeholders can reap a series of benefits, such as higher efficiency and effectiveness, accuracy, transparency and lower compliance costs, but also face risks, such as cyber risk, algorithmic biases and dehumanisation.