
    Coupling CAD and CFD codes within a virtual integration platform

    The Virtual Integration Platform (VIP) is an essential component of the VIRTUE project. It provides a system for combining disparate numerical analysis methods into a simulation environment. The platform allows process chains to be defined, tools to be allocated to each task, and users to be assigned to perform the individual tasks. The platform also manages the data that are imported into or generated within a process, so that a version history of input and output can be evaluated. Within the VIP, a re-usable template for a given process chain can be created. A process chain is composed of one or more smaller tasks. For each of these tasks, a selection of available tools can be allocated. The advanced scripting methods in the VIP use wrappers for managing the individual tools. A wrapper allows communication between the platform and the tool, and passes input and output data as necessary, in most cases without modifying the tool in any way. In this way, third-party tools may also be used without the need for access to source code or special modifications. The included case study demonstrates several advantages of using the integration platform. A parametric propeller design process couples CAD and CFD codes to adapt the propeller to given operating constraints. The VIP template helped eliminate common user errors and captured enough expert knowledge that a casual user could perform the given tasks with minimal guidance. Areas of improvement in the in-house codes and in the overall process were identified while using the integration platform. Additionally, the process chain was designed to facilitate formal optimisation methods.
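    The wrapper idea described in the abstract can be illustrated with a minimal sketch. The snippet below is not the VIP's actual API; the class, executable names, and command-line flags are assumptions chosen for illustration. It shows the general pattern: each wrapper drives an unmodified third-party tool through files and arguments, and hands its output to the next task in the chain.

```python
import subprocess
from pathlib import Path

class ToolWrapper:
    """Hypothetical wrapper: runs an unmodified external tool and
    hands its output to the next task in a process chain."""

    def __init__(self, executable: str, workdir: Path):
        self.executable = executable          # third-party tool, used as-is
        self.workdir = Path(workdir)

    def run(self, input_file: Path) -> Path:
        self.workdir.mkdir(parents=True, exist_ok=True)
        output_file = self.workdir / (input_file.stem + ".out")
        # Communicate with the tool only through files and command-line
        # arguments, so no access to its source code is required.
        subprocess.run(
            [self.executable, str(input_file), "-o", str(output_file)],
            check=True,
        )
        return output_file

# Example process chain: CAD geometry generation feeding a CFD analysis.
# Both executables and their flags are placeholders, not real tools.
if __name__ == "__main__":
    cad = ToolWrapper("propeller_cad", Path("work/cad"))
    cfd = ToolWrapper("propeller_cfd", Path("work/cfd"))
    geometry = cad.run(Path("design_parameters.txt"))
    results = cfd.run(geometry)
    print("CFD results written to", results)
```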

    Web Data Extraction, Applications and Techniques: A Survey

    Web Data Extraction is an important problem that has been studied by means of different scientific tools and in a broad range of applications. Many approaches to extracting data from the Web have been designed to solve specific problems and operate in ad-hoc domains. Other approaches, instead, heavily reuse techniques and algorithms developed in the field of Information Extraction. This survey aims to provide a structured and comprehensive overview of the literature in the field of Web Data Extraction. We provide a simple classification framework in which existing Web Data Extraction applications are grouped into two main classes, namely applications at the Enterprise level and at the Social Web level. At the Enterprise level, Web Data Extraction techniques emerge as a key tool to perform data analysis in Business and Competitive Intelligence systems as well as for business process re-engineering. At the Social Web level, Web Data Extraction techniques make it possible to gather large amounts of structured data continuously generated and disseminated by Web 2.0, Social Media and Online Social Network users, which offers unprecedented opportunities to analyze human behavior at a very large scale. We also discuss the potential for cross-fertilization, i.e., the possibility of re-using Web Data Extraction techniques originally designed for a given domain in other domains.
    Comment: Knowledge-based System
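    A minimal, self-contained sketch of the wrapper-style extraction that such surveys classify is shown below. The page layout, CSS class names, and the ProductPriceExtractor class are all invented for illustration; real extractors typically infer or learn these rules rather than hard-coding them.

```python
from html.parser import HTMLParser

class ProductPriceExtractor(HTMLParser):
    """Toy wrapper: pulls (name, price) pairs out of a listing whose
    layout is known in advance."""

    def __init__(self):
        super().__init__()
        self.records, self._field = [], None

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class") or ""
        if "name" in classes or "price" in classes:
            self._field = "name" if "name" in classes else "price"

    def handle_data(self, data):
        if self._field == "name":
            self.records.append({"name": data.strip()})
        elif self._field == "price":
            self.records[-1]["price"] = data.strip()
        self._field = None

# Hard-coded sample page standing in for a fetched document.
page = """
<ul>
  <li><span class="name">Widget A</span> <span class="price">9.99</span></li>
  <li><span class="name">Widget B</span> <span class="price">14.50</span></li>
</ul>
"""
extractor = ProductPriceExtractor()
extractor.feed(page)
print(extractor.records)
# [{'name': 'Widget A', 'price': '9.99'}, {'name': 'Widget B', 'price': '14.50'}]
```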

    Complete Semantics to empower Touristic Service Providers

    The tourism industry has a significant impact on the world's economy, contributing 10.2% of the world's gross domestic product in 2016. It has become a very competitive industry, in which a strong online presence is essential for business success. To achieve this goal, the proper use of the latest Web technologies, particularly schema.org annotations, is crucial. In this paper, we present our effort to improve the online visibility of touristic service providers in the region of Tyrol, Austria, by creating and deploying a substantial amount of semantic annotations according to schema.org, a widely used vocabulary for structured data on the Web. We started our work with Tourismusverband (TVB) Mayrhofen-Hippach and all touristic service providers in the Mayrhofen-Hippach region, and applied the same approach to other TVBs and regions, as well as other use cases. The rationale for doing this is straightforward. Having schema.org annotations enables search engines to understand the content better and provide better results for end users, and enables various intelligent applications to utilize them. As a direct consequence, the region of Tyrol and its touristic service providers increase their online visibility and decrease their dependency on intermediaries, i.e., Online Travel Agencies (OTAs).
    Comment: 18 pages, 6 figures
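    To make the idea concrete, the sketch below builds a schema.org annotation as JSON-LD and wraps it in the script tag a page would embed. The accommodation name, URL, and contact details are invented placeholders, not data from the paper; only the schema.org types and properties (Hotel, PostalAddress) reflect the real vocabulary.

```python
import json

# Placeholder provider data; only the schema.org vocabulary is real.
annotation = {
    "@context": "https://schema.org",
    "@type": "Hotel",
    "name": "Alpenhof Mayrhofen",                 # invented example name
    "url": "https://www.example.org/alpenhof",    # invented example URL
    "telephone": "+43 0000 000000",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "Hauptstrasse 1",
        "addressLocality": "Mayrhofen",
        "postalCode": "6290",
        "addressCountry": "AT",
    },
}

# Embedding the JSON-LD in a page lets search engines and other
# applications read the content as structured data.
print('<script type="application/ld+json">')
print(json.dumps(annotation, indent=2))
print("</script>")
```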

    Beyond XSPEC: Towards Highly Configurable Analysis

    We present a quantitative comparison between software features of the de facto standard X-ray spectral analysis tool, XSPEC, and ISIS, the Interactive Spectral Interpretation System. Our emphasis is on customized analysis, with ISIS offered as a strong example of configurable software. While noting that XSPEC has been of immense value to astronomers, and that its scientific core is moderately extensible--most commonly via the inclusion of user-contributed "local models"--we identify a series of limitations with its use beyond conventional spectral modeling. We argue that from the viewpoint of the astronomical user, the XSPEC internal structure presents a Black Box Problem, with many of its important features hidden from the top-level interface, thus discouraging user customization. Drawing from examples in custom modeling, numerical analysis, parallel computation, visualization, data management, and automated code generation, we show how a numerically scriptable, modular, and extensible analysis platform such as ISIS facilitates many forms of advanced astrophysical inquiry.
    Comment: Accepted by PASP, for July 2008 (15 pages
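    The kind of user-level "local model" customization the abstract contrasts can be sketched generically. The example below is plain NumPy/SciPy, not the XSPEC or ISIS interface: a user-defined continuum-plus-line model is an ordinary function, the data are synthetic, and the fit is driven from a script, which is the sense in which a scriptable platform keeps the model and the fitting loop open to the user.

```python
import numpy as np
from scipy.optimize import curve_fit

def powerlaw_plus_line(energy, norm, index, line_norm, line_e, line_sigma):
    """User-defined model: power-law continuum plus a Gaussian line."""
    continuum = norm * energy ** (-index)
    line = line_norm * np.exp(-0.5 * ((energy - line_e) / line_sigma) ** 2)
    return continuum + line

# Synthetic "spectrum" standing in for real counts data.
rng = np.random.default_rng(0)
energy = np.linspace(1.0, 10.0, 200)
truth = powerlaw_plus_line(energy, 10.0, 1.7, 3.0, 6.4, 0.2)
observed = truth + rng.normal(scale=0.3, size=energy.size)

# Scriptable fit: starting values and the model are ordinary Python
# objects, so they can be generated, looped over, or parallelised.
p0 = [5.0, 1.0, 1.0, 6.0, 0.3]
params, cov = curve_fit(powerlaw_plus_line, energy, observed, p0=p0)
print("best-fit parameters:", np.round(params, 3))
```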

    Slisp: A Flexible Software Toolkit for Hybrid, Embedded and Distributed Applications

    We describe Slisp (pronounced ‘Ess-Lisp’), a hybrid Lisp–C programming toolkit for the development of scriptable and distributed applications. Computationally expensive operations implemented as separate C-coded modules are selectively compiled into a small Xlisp interpreter, then called as Lisp functions in a Lisp-coded program. The resulting hybrid program may run in several modes: as a stand-alone executable, embedded in a different C program, as a networked server accessed from another Slisp client, or as a networked server accessed from a C-coded client. Five years of experience with Slisp, as well as experience with other scripting languages such as Tcl and Perl, are summarized. These experiences suggest that Slisp will be most useful for mid-sized applications, in which the kinds of scripting and embeddability features provided by Tcl and Perl can be extended efficiently to larger applications while maintaining a well-defined standard (Common Lisp) for these extensions. In addition, the generality of Lisp makes it a good candidate for an application-level communication language in distributed environments.
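    The hybrid pattern described here, an interpreted front end calling separately compiled C routines for the expensive work, can be sketched in an analogous (non-Slisp) way. The snippet below uses Python's ctypes purely as an illustration of the same division of labour; the shared library name and the convolve_sum function are hypothetical and would have to be built from a C source first.

```python
import ctypes

# Analogous sketch, not Slisp itself: an expensive routine compiled
# separately in C is loaded and called from interpreted code.
# Hypothetical C declaration:
#   double convolve_sum(const double *a, const double *b, int n);
lib = ctypes.CDLL("./libfastops.so")        # hypothetical compiled module
lib.convolve_sum.restype = ctypes.c_double
lib.convolve_sum.argtypes = [
    ctypes.POINTER(ctypes.c_double),
    ctypes.POINTER(ctypes.c_double),
    ctypes.c_int,
]

def convolve_sum(a, b):
    """Thin interpreted wrapper around the C-coded hot loop."""
    n = len(a)
    arr_t = ctypes.c_double * n
    return lib.convolve_sum(arr_t(*a), arr_t(*b), n)

print(convolve_sum([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))
```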