
    USING WEB TECHNOLOGY TO IMPROVE THE ACCOUNTING OF SMALL AND MEDIUM ENTERPRISES. AN ACADEMIC APPROACH TO IMPLEMENTATION OF IFRS

    One way to support the accounting standard-setting process and to facilitate access to those standards is the implementation of modern accounting reporting methods using web technology. In this regard, SMEs are under the pressure of two major factors: the implementation of accounting standards and the revolution in IT. The purpose of this paper is to define web accounting, explain its implications for IFRS, and discuss the key features of implementing this form of accounting for Small and Medium Enterprises (SMEs). Web accounting is accounting software based on XML technology that stores records and processes accounting transactions using HTTP as its primary communications protocol, and delivers web-based information in HTML format that can then be translated into other formats. Web-based accounting provides the benefit of cost savings and increased efficiency. It also allows employees and external users (suppliers, customers and investors) real-time access to accounting data, translates reports into XBRL format and facilitates the adoption of IFRS.
    Keywords: Web Accounting, SMEs, Web Technology, XML, XBRL, IFRS
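    As a toy illustration of the web-accounting pipeline described above (XML records delivered over HTTP), here is a minimal Python sketch; the element names, the sample transaction, and the endpoint are invented for the example and do not follow the actual IFRS or XBRL schemas.

```python
# Minimal sketch: serialize a hypothetical accounting entry to XML and
# serve it over HTTP, in the spirit of the web-accounting model above.
# All element names and the sample data are illustrative, not IFRS/XBRL.
from xml.etree.ElementTree import Element, SubElement, tostring
from http.server import BaseHTTPRequestHandler, HTTPServer

def entry_to_xml(date, account, amount):
    root = Element("transaction")
    SubElement(root, "date").text = date
    SubElement(root, "account").text = account
    SubElement(root, "amount").text = str(amount)
    return tostring(root, encoding="unicode")

class LedgerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Deliver the record as XML; a client could transform it to
        # HTML, XBRL, or another format downstream.
        body = entry_to_xml("2023-01-15", "Sales", 1250.00)
        self.send_response(200)
        self.send_header("Content-Type", "application/xml")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), LedgerHandler).serve_forever()
```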

    BIKE: Bilingual Keyphrase Experiments

    This paper presents a novel strategy for translating lists of keyphrases. Typical keyphrase lists appear in scientific articles, information retrieval systems and web page metadata. Our system combines a statistical translation model trained on a bilingual corpus of scientific papers with sense-focused look-up in a large bilingual terminological resource. For the latter, we developed a novel technique that benefits from viewing the keyphrase list as contextual help for sense disambiguation. The optimal combination of modules was discovered by a genetic algorithm. Our work applies to the French/English language pair.
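    To make the sense-disambiguation idea concrete, here is a toy Python sketch of using the whole keyphrase list as shared context when choosing among dictionary translations; the lexicon, the domain tags, and the majority-vote scoring are invented stand-ins for the paper's actual statistical and terminological modules.

```python
# Toy sketch of disambiguating dictionary translations by treating the
# keyphrase list as context. All data and the scoring rule are invented.
from collections import Counter

LEXICON = {  # French keyphrase -> candidate English translations
    "avocat": ["lawyer", "avocado"],
    "tribunal": ["court"],
    "plaidoirie": ["pleading", "oral argument"],
}

DOMAIN = {  # crude domain tags per English term, for context scoring
    "lawyer": "law", "court": "law", "pleading": "law",
    "oral argument": "law", "avocado": "food",
}

def translate_list(keyphrases):
    # Pick, for each phrase, the candidate whose domain agrees with the
    # majority domain of all candidates across the whole list.
    domains = Counter(DOMAIN[c] for k in keyphrases for c in LEXICON[k])
    majority = domains.most_common(1)[0][0]
    return [next((c for c in LEXICON[k] if DOMAIN[c] == majority),
                 LEXICON[k][0])
            for k in keyphrases]

print(translate_list(["avocat", "tribunal", "plaidoirie"]))
# -> ['lawyer', 'court', 'pleading'], not 'avocado'
```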

    GEORDi: Supporting lightweight end-user authoring and exploration of Linked Data

    The US and UK governments have recently made much of the data created by their various departments available as data sets (often as CSV files) on the web. Although this "open data" is a valuable asset, much of it remains effectively inaccessible to citizens for the following reasons: (1) it is often a tedious, many-step process simply to find data relevant to a query, and once a candidate data set is located, it often must be downloaded and opened in a separate application just to see whether it contains data that may satisfy the query; (2) it is difficult to join related data sets to create richer, integrated information; (3) it is particularly difficult to query a single data set, and even harder to query across related data sets; (4) to date, one has had to be well versed in semantic web protocols such as SPARQL, RDF and URI formation to integrate and query such sources as reusable linked data. Our goal has been to develop tools that let regular, non-programmer web citizens make use of this Web of Data. To this end, we present GEORDi, a set of integrated tools and services that lets citizen users identify, explore, query and represent these open data sources over the web via Linked Data mechanisms. In this paper we describe the GEORDi process for authoring new and translating existing open data into a linkable format, GEORDi's lens mechanism for rendering rich, plain-language descriptions and views of resources, and the GEORDi link-sliding paradigm for data exploration. With these tools we demonstrate that it is possible to make the Web of open (and linked) data accessible to ordinary web citizens.
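    For readers unfamiliar with the barrier the paper describes, the following sketch (using the rdflib Python library, not GEORDi's own tooling) shows the kind of cross-property SPARQL join that is routine for a semantic web expert but out of reach for most citizens; the data and URIs are made up for the example.

```python
# Illustrative sketch (not GEORDi's API) of a SPARQL join over linked
# open data. The graph contents and predicate URIs below are invented.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.leeds, EX.population, Literal(793000)))
g.add((EX.leeds, EX.recyclingRate, Literal(0.42)))

# Join two "datasets" (here, two predicates) on a shared resource.
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?city ?pop ?rate WHERE {
        ?city ex:population ?pop ;
              ex:recyclingRate ?rate .
    }""")
for city, pop, rate in results:
    print(city, pop, rate)
```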

    Translating expressive ontology mappings into rewriting rules to implement query rewriting

    The increasing amount of structured RDF data published by the Linked Data community poses a great challenge when it comes to reconciling the heterogeneous schemas adopted by data publishers. For several years, the Semantic Web community has been developing algorithms for aligning data models (ontologies). Nevertheless, exploiting such ontology alignments to achieve data integration is still an under-supported research topic. The semantics of ontology alignments, often defined over a logical framework, implies a reasoning step over huge amounts of data, which is often hard to implement and rarely scales to Web dimensions. This paper presents our approach for translating DL-like ontology alignments into graph patterns that can be used to implement ontological mediation in the form of SPARQL query rewriting and generation. This approach builds on our previous work on SPARQL query rewriting through syntactic transformations of basic graph patterns. Supporting a rich ontology alignment language in our system is important for two reasons: first, users can express rich alignments while focusing on their semantic soundness; second, more verbose correspondences of RDF patterns can be generated by the translation process, providing a denotational semantics for the alignment language itself. The approach has been implemented as an open-source Java API freely available to the community.
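    A minimal sketch of the rewriting idea follows, assuming a single invented correspondence (src:Author as "a tgt:Person with at least one tgt:wrote triple") compiled to a graph-pattern substitution; the paper's alignment language and Java API are far richer than this string-level Python toy.

```python
# Minimal sketch: a DL-style correspondence is compiled to an RDF graph
# pattern, and query triple patterns over the source vocabulary are
# replaced by that pattern. The alignment, the vocabularies, and the
# rule format are all invented here for illustration.

# Correspondence: src:Author is equivalent to "a tgt:Person that is the
# subject of at least one tgt:wrote triple".
RULES = {
    "?x a src:Author .":
        "?x a tgt:Person . ?x tgt:wrote ?anyWork .",
}

def rewrite(query: str) -> str:
    for src_pattern, tgt_pattern in RULES.items():
        query = query.replace(src_pattern, tgt_pattern)
    return query

q = "SELECT ?x WHERE { ?x a src:Author . ?x src:name ?n . }"
print(rewrite(q))
# -> SELECT ?x WHERE { ?x a tgt:Person . ?x tgt:wrote ?anyWork . ?x src:name ?n . }
# (src:name stays as-is: no rule covers it, mirroring partial alignments.)
```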

    From Questions to Effective Answers: On the Utility of Knowledge-Driven Querying Systems for Life Sciences Data

    We compare two distinct approaches for querying data in the context of the life sciences. The first approach utilizes conventional databases to store the data and intuitive form-based interfaces to facilitate easy querying. These interfaces can be seen as implementing a set of "pre-canned" queries commonly used by the life science researchers that we study. The second approach is based on Semantic Web technologies and is knowledge (model) driven. It utilizes a large OWL ontology and the same datasets as before, but represented as RDF instances of the ontology concepts. An intuitive interface is provided that allows the formulation of RDF triple-based queries. Both approaches are being used in parallel by a team of cell biologists in their daily research activities, with the objective of gradually replacing the conventional approach with the knowledge-driven one. This provides us with a valuable opportunity to compare and qualitatively evaluate the two approaches. We describe several benefits of the knowledge-driven approach in comparison to the traditional way of accessing data, and highlight a few limitations as well. We believe that our analysis not only explicitly highlights the specific benefits and limitations of Semantic Web technologies in our context but also contributes toward effective ways of translating a question in a researcher's mind into precise computational queries with the intent of obtaining effective answers from the data. While researchers often assume the benefits of Semantic Web technologies, we explicitly illustrate these in practice.
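    The contrast between the two approaches can be sketched as follows, with a hypothetical miniature ontology in the rdflib Python library (not the paper's OWL ontology or interfaces): a form-based interface hard-wires one question, while triple patterns can be composed freely at query time.

```python
# Sketch of the contrast the paper draws: a fixed, "pre-canned" lookup
# versus an ad-hoc triple-pattern query over the same data as RDF.
# Ontology terms and data are hypothetical, not the paper's ontology.
from rdflib import Graph, Namespace

BIO = Namespace("http://example.org/bio#")
g = Graph()
g.add((BIO.geneA, BIO.expressedIn, BIO.neuron))
g.add((BIO.geneA, BIO.regulates, BIO.geneB))

# Form-based style: one hard-wired question behind a form.
def genes_expressed_in(tissue):
    return [s for s, _, o in g.triples((None, BIO.expressedIn, tissue))]

# Knowledge-driven style: compose arbitrary triple patterns at query time.
ad_hoc = g.query("""
    PREFIX bio: <http://example.org/bio#>
    SELECT ?g ?t WHERE { ?g bio:expressedIn ?t ; bio:regulates ?x . }""")

print(genes_expressed_in(BIO.neuron))
for row in ad_hoc:
    print(row)
```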

    VISUALIZATION FOR IDENTIFYING SERVICE RESPONSES

    A secure web gateway is a type of security solution that prevents unsecured traffic from entering an organization's internal network. By translating static log data from a secure web gateway into a meaningful, readable format, an end user may identify issues that cause delayed responses from services. Incorporating a visualization of the system's health into web gateway software may provide clarity to end users. By binning logged data into five-minute intervals for a selected daily or weekly duration and displaying the data on a single screen, an end user may easily view the health of services.
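    A small Python sketch of the five-minute binning described above, assuming log records carry a timestamp and a response latency; the field names and the mean-per-interval summary are illustrative, not the gateway's actual log schema.

```python
# Bin hypothetical gateway log records into five-minute intervals and
# summarize each bin, ready to plot as a single health view.
from collections import defaultdict
from datetime import datetime

def bin_latencies(records, bin_minutes=5):
    bins = defaultdict(list)
    for ts, latency_ms in records:
        t = datetime.fromisoformat(ts)
        minute = (t.minute // bin_minutes) * bin_minutes
        key = t.replace(minute=minute, second=0, microsecond=0)
        bins[key].append(latency_ms)
    # One value per interval (here, the mean latency).
    return {k: sum(v) / len(v) for k, v in sorted(bins.items())}

logs = [("2023-05-01T10:02:11", 120), ("2023-05-01T10:04:59", 480),
        ("2023-05-01T10:07:30", 95)]
print(bin_latencies(logs))
# Two bins: 10:00 (mean 300.0) and 10:05 (mean 95.0)
```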

    AN ANALYSIS OF TRANSLATION TECHNIQUES AND QUALITY OF THE URL: en.wikipedia.org/wiki/Boston_Tea_Party TRANSLATED BY GOOGLE TRANSLATE

    This research is a qualitative study employing a descriptive method. It aims to describe the translation techniques that occur in the translation, and to assess the quality, covering accuracy and acceptability, of the sentences of the en.wikipedia.org/wiki/Boston_Tea_Party web page translated by Google Translate. This research applied total sampling as the sampling technique, since all sentences on the en.wikipedia.org/wiki/Boston_Tea_Party web page were taken as data. The research was conducted based on primary and secondary data. The primary data consist of 117 sentences taken from the web page; the secondary data were collected by distributing a questionnaire to several raters. The analysis shows that Google Translate applied 7 kinds of translation techniques to translate the page: literal, amplification, reduction, transposition, borrowing, calque, and particularization. The accuracy assessment shows 18 data points considered accurate, 96 considered less accurate, and 3 considered inaccurate, meaning that, in general, the translation is less accurate. The acceptability assessment shows 20 data points considered acceptable, 87 considered less acceptable, and 10 considered unacceptable, meaning that, in general, the translation is less acceptable. The analysis also shows that the implementation of these techniques makes the translation less accurate and less acceptable: Google Translate cannot determine a suitable technique to produce a quality translation of the sentences found on the web page. It is hoped that this thesis will be beneficial for students, especially English Department students, in enlarging their knowledge of web page translation, especially translation produced by online machine translation. For the improvement of web page translation technology, this research also recommends that Google Translate enrich its translation database and upgrade its machine translation engine. This research can also serve as a consideration for internet users when using an online translator service to translate a web page.

    DiLogics: Creating Web Automation Programs With Diverse Logics

    Knowledge workers frequently encounter repetitive web data entry tasks, like updating records or placing orders. Web automation increases productivity, but translating tasks to web actions accurately and extending to new specifications is challenging. Existing tools can automate tasks that perform the same logical trace of UI actions (e.g., input text in each field in order), but do not support tasks requiring different executions based on varied input conditions. We present DiLogics, a programming-by-demonstration system that utilizes NLP to assist users in creating web automation programs that handle diverse specifications. DiLogics first semantically segments input data to structured task steps. By recording user demonstrations for each step, DiLogics generalizes the web macros to novel but semantically similar task requirements. Our evaluation showed that non-experts can effectively use DiLogics to create automation programs that fulfill diverse input instructions. DiLogics provides an efficient, intuitive, and expressive method for developing web automation programs satisfying diverse specifications
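    As a rough illustration of the step-matching idea (not DiLogics' actual NLP pipeline), the Python toy below segments input into steps and dispatches each to the most similar demonstrated step, using token overlap in place of real semantic embeddings; the demonstrations and macro names are placeholders.

```python
# Toy sketch of mapping each new task step to the demonstrated step it
# most resembles. DiLogics uses NLP embeddings; Jaccard token overlap
# and the demonstration table below are invented stand-ins.
def similarity(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

# Each demonstrated step maps to a recorded web macro (placeholders).
demos = {
    "enter shipping address": "macro_fill_address",
    "add item to cart": "macro_add_item",
}

def dispatch(step: str) -> str:
    # Generalize: run the macro whose demonstration is most similar.
    best = max(demos, key=lambda d: similarity(step, d))
    return demos[best]

for step in ["add two items to the cart", "type the shipping address"]:
    print(step, "->", dispatch(step))
# add two items to the cart -> macro_add_item
# type the shipping address -> macro_fill_address
```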

    Towards Query Logs for Privacy Studies: On Deriving Search Queries from Questions

    Translating verbose information needs into crisp search queries is a phenomenon that is ubiquitous but hardly understood. Insights into this process could be valuable in several applications, including synthesizing large privacy-friendly query logs from public Web sources that are readily available to the academic research community. In this work, we take a step towards understanding query formulation by tapping into the rich potential of community question answering (CQA) forums. Specifically, we sample natural language (NL) questions spanning diverse themes from the Stack Exchange platform, and conduct a large-scale conversion experiment where crowdworkers submit search queries they would use when looking for equivalent information. We provide a careful analysis of this data, accounting for possible sources of bias during conversion, along with insights into user-specific linguistic patterns and search behaviors. We release a dataset of 7,000 question-query pairs from this study to facilitate further research on query understanding.
    Comment: ECIR 2020 Short Paper
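    As a reference point for the conversion the crowdworkers perform, here is a naive Python baseline that derives a keyword query from a question by dropping stopwords and punctuation; the stopword list is ad hoc and the example question is invented, and the paper's human-written queries are of course far more varied than this.

```python
# Naive question-to-query baseline: keep content words, drop stopwords
# and punctuation. Purely illustrative; not the paper's method.
import re

STOPWORDS = {"how", "do", "i", "a", "an", "the", "my", "to", "is",
             "what", "can", "on", "in", "of", "for"}

def question_to_query(question: str) -> str:
    tokens = re.findall(r"[a-z0-9']+", question.lower())
    return " ".join(t for t in tokens if t not in STOPWORDS)

print(question_to_query("How do I remove a stripped screw from my laptop?"))
# -> "remove stripped screw from laptop"
```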