15,108 research outputs found

    Challenges to describe QoS requirements for web services quality prediction to support web services interoperability in electronic commerce

    Quality of service (QoS) is essential to quality assurance for web service applications, and web service quality has contributed to the successful implementation of Electronic Commerce (EC) applications. However, QoS remains a major open issue in web services research and one of the main research questions that needs to be explored. We argue that QoS should not only be measured but also predicted during the development and implementation stages. Determining and choosing QoS requirements for high-quality web services is subject to a number of challenges and constraints, so this paper highlights why QoS requirements for prediction are difficult to identify: web services serve many different perspectives and purposes, and various prediction techniques exist for describing QoS requirements. The paper also introduces a metamodel that captures what makes a good web service.
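
    The abstract stops short of the metamodel itself; purely as an illustration of the kind of QoS attributes such a model would cover, here is a minimal Python sketch (all class, field, and function names are hypothetical, not from the paper) pairing a QoS requirement with a naive moving-average prediction:

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical QoS attributes commonly cited for web services;
# the paper's actual metamodel may differ.
@dataclass
class QoSRequirement:
    max_response_time_ms: float   # upper bound the consumer will accept
    min_availability: float       # e.g. 0.999 = "three nines"
    min_throughput_rps: float     # requests per second

def predict_response_time(history_ms: list[float], window: int = 5) -> float:
    """Naive moving-average prediction of the next response time."""
    return mean(history_ms[-window:])

requirement = QoSRequirement(max_response_time_ms=200.0,
                             min_availability=0.999,
                             min_throughput_rps=50.0)
history = [120.0, 135.0, 128.0, 160.0, 190.0, 210.0]
predicted = predict_response_time(history)
print(f"predicted: {predicted:.1f} ms, "
      f"meets requirement: {predicted <= requirement.max_response_time_ms}")
```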

    Ontology of core data mining entities

    In this article, we present OntoDM-core, an ontology of core data mining entities. OntoDM-core defines the most essential data mining entities in a three-layered ontological structure comprising a specification, an implementation, and an application layer. It provides a representational framework for describing the mining of structured data and, in addition, provides taxonomies of datasets, data mining tasks, generalizations, data mining algorithms, and constraints, based on the type of data. OntoDM-core is designed to support a wide range of applications/use cases, such as semantic annotation of data mining algorithms, datasets and results; annotation of QSAR studies in the context of drug discovery investigations; and disambiguation of terms in text mining. The ontology has been thoroughly assessed following the practices in ontology engineering, is fully interoperable with many domain resources, and is easy to extend.
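
    OntoDM-core is published as an OWL ontology; the following rdflib sketch is only a loose illustration of the three-layered idea described above (the namespace, class names, and layer annotation are placeholders, not the ontology's actual IRIs):

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS
from rdflib.namespace import OWL

# Placeholder namespace -- not OntoDM-core's actual IRI.
ODM = Namespace("http://example.org/ontodm-sketch#")

g = Graph()
g.bind("odm", ODM)

# A few core entities and the layer each belongs to, loosely following
# the specification / implementation / application split described above.
entities = {
    "DataMiningTask": "specification",
    "Generalization": "specification",
    "AlgorithmImplementation": "implementation",
    "DataMiningAlgorithmExecution": "application",
}
for name, layer in entities.items():
    g.add((ODM[name], RDF.type, OWL.Class))
    g.add((ODM[name], ODM.layer, Literal(layer)))

# Datasets taxonomized by the type of data they hold.
g.add((ODM.Dataset, RDF.type, OWL.Class))
g.add((ODM.TimeSeriesDataset, RDFS.subClassOf, ODM.Dataset))

print(g.serialize(format="turtle"))
```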

    Designing Improved Sediment Transport Visualizations

    Monitoring, or more commonly, modeling of sediment transport in the coastal environment is a critical task with relevance to coastline stability, beach erosion, tracking environmental contaminants, and safety of navigation. Increased intensity and regularity of storms such as Superstorm Sandy heighten the importance of our understanding of sediment transport processes. A weakness of current modeling capabilities is the difficulty of visualizing results in an intuitive manner. Many of the available visualization software packages display only a single variable at once, usually as a two-dimensional, plan-view cross-section. With such limited display capabilities, sophisticated 3D models are undermined in both the interpretation of results and the dissemination of information to the public. Here we explore a subset of existing modeling capabilities (specifically, modeling scour around man-made structures) and visualization solutions, examine their shortcomings, and present a design for a 4D visualization for sediment transport studies that is based on perceptually-focused data visualization research and recent and ongoing developments in multivariate displays. Vector and scalar fields are co-displayed, yet kept independently identifiable by exploiting human perception's separation of color, texture, and motion. Bathymetry, sediment grain-size distribution, and forcing hydrodynamics are a subset of the variables investigated for simultaneous representation. Direct interaction with field data is tested to support rapid validation of sediment transport model results. Our goal is a tight integration of both simulated data and real-world observations to support analysis and simulation of the impact of major sediment transport events such as hurricanes. We unite modeled results and field observations within a geodatabase designed as an application schema of the Arc Marine Data Model. Our real-world focus is the Redbird Artificial Reef Site, roughly 18 nautical miles offshore Delaware Bay, Delaware, where repeated surveys have identified active scour and bedform migration in 27 m water depth amongst the more than 900 deliberately sunken subway cars and vessels. Coincident high-resolution multibeam bathymetry, backscatter, and side-scan sonar data from surface and autonomous underwater vehicle (AUV) systems, along with complementary sub-bottom, grab sample, bottom imagery, and wave and current (via ADCP) datasets, provide the basis for analysis. This site is particularly attractive due to overlap with the Delaware Bay Operational Forecast System (DBOFS), a model that provides historical and forecast oceanographic data that can be tested in hindcast against significant changes observed at the site during Superstorm Sandy, and in predicting future changes through small-scale modeling around the individual reef objects.
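
    As a rough sketch of the co-display idea (scalar field carried by color, vector field kept independently identifiable as overlaid arrows), far simpler than the 4D design described above and using synthetic stand-in data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-ins for model output: bathymetry (scalar) and
# depth-averaged current (vector) on the same grid.
x, y = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
bathymetry = -27 + 2 * np.sin(x) * np.cos(y)   # depth in metres
u, v = -np.cos(y), np.sin(x)                   # current components

fig, ax = plt.subplots(figsize=(7, 6))
# Scalar field mapped to color...
mesh = ax.pcolormesh(x, y, bathymetry, cmap="viridis", shading="auto")
fig.colorbar(mesh, ax=ax, label="Depth (m)")
# ...vector field kept separately identifiable as arrows on top.
step = 4
ax.quiver(x[::step, ::step], y[::step, ::step],
          u[::step, ::step], v[::step, ::step], color="white")
ax.set_title("Co-display of scalar (color) and vector (arrows) fields")
plt.show()
```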

    An information retrieval approach to ontology mapping

    In this paper, we present a heuristic mapping method and a prototype mapping system that support the process of semi-automatic ontology mapping for the purpose of improving semantic interoperability in heterogeneous systems. The approach is based on the idea of semantic enrichment, i.e., using instance information of the ontology to enrich the original ontology and calculate similarities between concepts in two ontologies. The functional settings for the mapping system are discussed and the evaluation of the prototype implementation of the approach is reported.
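
    The paper's method is only summarized above; a minimal sketch of the underlying information-retrieval idea, treating each concept's instance text as a document and scoring cross-ontology concept pairs by TF-IDF cosine similarity, might look like this (the toy ontologies and their instance text are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented example: instance text gathered under each concept.
ontology_a = {"Car": "sedan hatchback engine wheels road vehicle",
              "Boat": "hull sail harbor water vessel"}
ontology_b = {"Automobile": "engine road wheels vehicle sedan",
              "Ship": "vessel water hull cargo harbor"}

concepts_a, docs_a = zip(*ontology_a.items())
concepts_b, docs_b = zip(*ontology_b.items())

# Fit one vocabulary over both ontologies, then compare concept vectors.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(docs_a + docs_b)
sims = cosine_similarity(matrix[:len(docs_a)], matrix[len(docs_a):])

# Propose, for each concept in A, its most similar concept in B.
for i, concept in enumerate(concepts_a):
    j = sims[i].argmax()
    print(f"{concept} -> {concepts_b[j]} (similarity {sims[i][j]:.2f})")
```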

    The Building Information Model and the IFC standard: analysis of the characteristics necessary for the acoustic and energy simulation of buildings

    The new European Directive 2014/24/EU requires the use of BIM procedures in the construction of public buildings in all Member States. The countries belonging to the European Union are obliged to transpose the Directive and adapt their procedures accordingly. The paper analyzes the IFC format, the only standard recognized by the European Directive for BIM procedures, in order to assess its use for building simulations. IFC, described by ISO 16739 (2013), is today a standard that describes the topology of the constructive elements of a building and everything that belongs to it. The format includes geometrical information on the rooms and on all building components, including performance properties (thermal transmittance, fire resistance, sound insulation). In other words, it is a software-vendor-independent object file to which, according to the European Directive, it will be compulsory to refer in the near future, during the different stages of the life of a building from the design phase to management and possible demolition at the end of life. The IFC initiative began in 1994, when an industry consortium invested in the development of a set of C++ classes that could support the development of integrated applications. Twelve US companies joined this consortium, initially named the "Industry Alliance for Interoperability". In September 1995 the Alliance opened membership to all interested parties, and in 1997 it changed its name to "International Alliance for Interoperability". The new alliance was reconstituted as a non-profit organization with the aim of developing and promoting the "Industry Foundation Classes" (IFC) as a neutral data model for the building product, useful for gathering information throughout the life cycle of a building facility. Since 2005 the Alliance has been carrying out its activities through its national chapters, known as buildingSMART. The present study aims at evaluating the IFC format by comparing the information and data contained in it with other formats already used for energy simulations of buildings, such as gbXML (Green Building XML), highlighting missing required information and proposing the inclusion of new information to enable energy and acoustic simulation. More generally, attention is focused on building physics simulation software intended to exploit the potential of the BIM model, enabling interoperability.
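
    As an illustration of the kind of programmatic access the IFC format allows, the following sketch uses the open-source ifcopenshell library to read performance properties from walls; the file name is an example, and whether a given model actually populates Pset_WallCommon depends on the authoring tool:

```python
import ifcopenshell
import ifcopenshell.util.element

# Example file name; any IFC (ISO 16739) model would do.
model = ifcopenshell.open("building.ifc")

for wall in model.by_type("IfcWall"):
    psets = ifcopenshell.util.element.get_psets(wall)
    # Pset_WallCommon carries performance properties such as thermal
    # transmittance and fire rating -- the kind of data that energy
    # and acoustic simulations need as input.
    common = psets.get("Pset_WallCommon", {})
    print(wall.GlobalId,
          "U-value:", common.get("ThermalTransmittance"),
          "Fire rating:", common.get("FireRating"))
```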

    Towards Semantic Integration of Heterogeneous Sensor Data with Indigenous Knowledge for Drought Forecasting

    In the Internet of Things (IoT) domain, various heterogeneous ubiquitous devices are able to connect and communicate with each other seamlessly, irrespective of the domain. Semantic representation of data through detailed, standardized annotation has been shown to improve the integration of interconnected heterogeneous devices. However, semantic representation of these heterogeneous data sources is not yet well supported for environmental monitoring systems. To achieve the maximum benefits of IoT for drought forecasting, a dedicated semantic middleware solution is required. This research proposes a middleware that semantically represents and integrates heterogeneous data sources with indigenous knowledge, based on a unified ontology, for an accurate IoT-based drought early warning system (DEWS).

    Comment: 5 pages, 3 figures. In Proceedings of the Doctoral Symposium of the 16th International Middleware Conference (Middleware Doctoral Symposium 2015), Ivan Beschastnikh and Wouter Joosen (Eds.). ACM, New York, NY, USA.
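
    A hedged sketch of what such semantic representation of a sensor reading can look like, using the W3C SOSA vocabulary with rdflib (the instance IRIs and the rainfall value are invented; the paper's unified ontology is not reproduced here):

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")
EX = Namespace("http://example.org/dews#")   # invented instance namespace

g = Graph()
g.bind("sosa", SOSA)

# One rainfall observation from a heterogeneous sensor, annotated so a
# drought-forecasting middleware can integrate it with other sources.
obs = EX.obs1
g.add((obs, RDF.type, SOSA.Observation))
g.add((obs, SOSA.madeBySensor, EX.rainGauge7))
g.add((obs, SOSA.observedProperty, EX.rainfallAmount))
g.add((obs, SOSA.hasSimpleResult, Literal("2.4", datatype=XSD.decimal)))
g.add((obs, SOSA.resultTime,
       Literal("2015-11-01T06:00:00Z", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```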

    Sustainable Design of Buildings using Semantic BIM and Semantic Web Services

    In response to growing concerns about climate change and the environment, sustainable design of buildings is increasingly demanded by building owners and users. However, fast evaluation of various design options and identification of the optimized design require application of design analysis tools such as energy modeling, daylight simulation, and natural ventilation analysis software. Energy analysis requires access to distributed sources of information: building element material properties provided by designers, mechanical equipment information provided by equipment manufacturers, weather data provided by weather reporting agencies, and energy cost data from energy providers. Gathering energy-related information from different sources and inputting it into an energy analysis application is a time-consuming process. This causes delays and increases the time needed to compare different design alternatives. This paper discusses how Semantic Web technology can facilitate information collection from several sources for energy analysis. The Semantic Web enables sharing, accessing, and combining information over the Internet in a machine-processable format. This would free building designers to concentrate on building design optimization rather than spending time on data preparation and manual entry into energy analysis applications.
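
    As a minimal sketch of this idea, not the paper's actual architecture: facts that would normally come from distributed providers are published as RDF and gathered with a single SPARQL query (all IRIs and values are invented):

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/energy#")  # invented vocabulary
g = Graph()

# Facts that would normally come from distributed sources:
g.add((EX.wall1, RDF.type, EX.BuildingElement))      # from the designer
g.add((EX.wall1, EX.uValue, Literal(0.35)))          # material property
g.add((EX.wall1, EX.suppliedBy, EX.manufacturerA))   # manufacturer data

# One query pulls together what an energy analysis tool needs.
results = g.query("""
    PREFIX ex: <http://example.org/energy#>
    SELECT ?element ?u WHERE {
        ?element a ex:BuildingElement ;
                 ex:uValue ?u .
    }""")
for element, u in results:
    print(element, "U-value:", u)
```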