
    Developing Interactive 3D Models for E-Learning Applications

    Some issues concerning the development of interactive 3D models for e-learning applications are considered. Given that 3D data sets are normally large and interactive display demands high-performance computation, a natural solution is to place the computational burden on the client machine rather than on the server. Mozilla and Google opted for a combination of client-side technologies, JavaScript and OpenGL, to handle 3D graphics in a web browser (Mozilla 3D and O3D, respectively). Based on the O3D model, core web technologies are considered and an example of the full process, from the generation of a 3D model to its interactive visualization in a web browser, is described. The challenging issue of creating realistic 3D models of real-world objects is discussed, and a method based on line projection for fast 3D reconstruction is presented. The generated model is then visualized in a web browser. The experiments demonstrate that visualization of 3D data in a web browser can provide a quality user experience. Moreover, the development of web applications is facilitated by the O3D JavaScript extension, allowing web designers to focus on 3D content generation.
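
    The abstract does not detail the line-projection method itself, so the following is only a minimal sketch of the usual geometric core of laser-line (structured-light) reconstruction: back-project an observed pixel on the projected line into a viewing ray and intersect it with the calibrated light plane. All calibration values below are hypothetical.

```python
import numpy as np

def back_project(pixel, K):
    """Back-project pixel (u, v) into a unit viewing ray in camera coordinates."""
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return ray / np.linalg.norm(ray)

def intersect_light_plane(ray, normal, offset):
    """Intersect a ray from the camera origin with the plane n.X + d = 0."""
    t = -offset / (normal @ ray)   # distance along the ray to the light plane
    return t * ray                 # reconstructed 3D point (camera coordinates)

# Hypothetical intrinsics and light-plane calibration, for illustration only.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
normal = np.array([0.0, 0.7071, 0.7071])   # unit normal of the projected line's plane
offset = -0.5

point = intersect_light_plane(back_project((400, 260), K), normal, offset)
print(point)
```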

    Analysis of deployment techniques for web-based applications in SMEs

    The Internet is no longer just a source for accessing information; it has become a valuable medium for social networking and software services. Web browsers can now access entire software systems available online to provide the user with a range of services. The concept of software as a service (SaaS) was born out of this. The number of development techniques and frameworks for such web applications has grown rapidly, and much research and development has been carried out on advancing the capability of web scripting languages and web browsers. However, a key part of the life-cycle of web applications that has not received adequate attention is deployment. The deployment techniques chosen to deploy a web application can have a serious effect on the cost of maintenance and the quality of service for the end user. A SaaS-modelled web application attempts to emulate the experience of a desktop software package. If a deployment process affects the availability and quality of service of a web application, then the core concept of this model is broken. This dissertation identifies approaches to designing a deployment process and the aspects that influence the quality of a deployment technique. A survey was circulated to a number of Irish small to medium-sized enterprises (SMEs) that develop web-based software. The survey provides an overview of the deployment processes used by these SMEs. Using this information, along with a review of the available literature and a detailed case study of a typical SME deploying SaaS-based products, the dissertation provides a critical analysis and evaluation of the current deployment techniques being used.

    Knowledge-based life event model for e-government service integration with illustrative examples

    The advancement of information and communications technology and web services offers an opportunity for e-government service integration, which can help improve the availability and quality of the services offered. However, few of the potential service integration applications have been adopted by governments to increase the accessibility of, and satisfaction with, government services and information for citizens. Recently, the 'life event' concept was introduced as the core element for integrating complex service delivery, improving the efficiency and reusability of e-government services and web-based information management systems. In addition, a semantic web-based ontology is considered to be the most powerful conceptual approach for dealing with the challenges associated with developing seamless systems in distributed environments. Among these challenges is interoperability, which can be loosely defined as the technical capability for interoperation. Despite the conceptual emergence of semantic web-based ontologies for life events, the question remains of what methodology to use when designing such an ontology. This paper proposes a semantic web-based life-event ontology model for e-government service integration; the model is implemented using the ontology modelling tool Protégé and evaluated using the Pellet reasoner and the SPARQL query language. In addition, the model is illustrated with two examples, the Saudi Arabia King Abdullah Scholarship and Hafiz, to show the advantages of integrated systems compared with standalone systems. These examples show that the new model can effectively support the automatic integration of standalone e-government services, so that citizens do not need to execute individual services manually. This can significantly improve the accessibility of e-government services and citizens' satisfaction.
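
    The paper's ontology and queries are not reproduced in the abstract. Purely as an illustration of the kind of SPARQL query such a life-event model enables, the sketch below uses rdflib with an invented namespace, class names, and file name; it is not the authors' actual schema.

```python
from rdflib import Graph

# Hypothetical local copy of a life-event ontology (assumed RDF/XML); the file
# and the vocabulary it uses are invented for illustration only.
g = Graph()
g.parse("life_events.owl", format="xml")

# Find every service (and the agency providing it) attached to a hypothetical
# "StudyAbroad" life event, instead of invoking each standalone service by hand.
query = """
PREFIX le: <http://example.org/life-events#>
SELECT ?service ?agency
WHERE {
    ?service le:supportsLifeEvent le:StudyAbroad ;
             le:providedBy        ?agency .
}
"""
for service, agency in g.query(query):
    print(service, agency)
```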

    National Urban Database and Access Portal Tool, NUDAPT

    Based on the need for advanced treatments of high-resolution urban morphological features (e.g., buildings, trees) in meteorological, dispersion, air quality and human exposure modeling systems for future urban applications, a new project was launched: the National Urban Database and Access Portal Tool (NUDAPT). NUDAPT is sponsored by the U.S. Environmental Protection Agency (USEPA) and involves collaborations and contributions from many groups, including federal and state agencies and private and academic institutions in the United States and other countries. It is designed to produce and provide gridded fields of urban canopy parameters for various new and advanced descriptions of model physics to improve urban simulations, given the availability of new high-resolution data on buildings, vegetation, and land use. Additional information, including gridded anthropogenic heating and population data, is incorporated to further improve urban simulations and to encourage and facilitate decision support and application linkages to human exposure models. An important core design feature is the use of web portal technology to enable NUDAPT to be a community-based system. This web-based portal technology will facilitate customization of data handling and retrieval (http://www.nudapt.org). This article provides an overview of NUDAPT and several example applications.

    How To Build Enterprise Data Models To Achieve Compliance To Standards Or Regulatory Requirements (and share data).

    Sharing data between organizations is challenging because it is difficult to ensure that those consuming the data interpret it accurately. The promise of the next-generation WWW, the Semantic Web, is that semantics about shared data will be represented in ontologies and available for automatic and accurate machine processing of data. Thus, there is inter-organizational business value in developing applications that have ontology-based enterprise models at their core. In an ontology-based enterprise model, business rules and definitions are represented as formal axioms, which are applied to enterprise facts to automatically infer facts not explicitly represented. If the proposition to be inferred is a requirement from, say, ISO 9000 or Sarbanes-Oxley, inference constitutes a model-based proof of compliance. In this paper, we detail the development and application of the TOVE ISO 9000 Micro-Theory, a model of ISO 9000 developed using ontologies for quality management (measurement, traceability, and quality management system ontologies). In so doing, we demonstrate that when enterprise models are developed using ontologies, they can be leveraged to support business analytics problems - in particular, compliance evaluation - and are shareable.
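
    The TOVE axioms themselves are not given in the abstract. The toy sketch below only illustrates the general pattern described above (formal rules applied to recorded enterprise facts to infer a compliance proposition); every predicate and entity name is invented and has no connection to the actual ISO 9000 micro-theory.

```python
# Enterprise "facts" recorded explicitly; predicate and entity names are invented.
facts = {
    ("has_documented_procedure", "order_handling"),
    ("records_measurement", "order_handling", "cycle_time"),
    ("traceable_to_requirement", "cycle_time", "customer_spec_12"),
}

def infers_compliance(facts, process):
    """Toy axiom: a process is 'compliant' if it has a documented procedure and
    at least one of its measurements is traceable to a requirement."""
    documented = ("has_documented_procedure", process) in facts
    measured = {f[2] for f in facts
                if f[0] == "records_measurement" and f[1] == process}
    traced = any(f[0] == "traceable_to_requirement" and f[1] in measured
                 for f in facts)
    return documented and traced

print(infers_compliance(facts, "order_handling"))   # True: inferred, not stored explicitly
```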

    A Linked Data Scalability Challenge: Concept Reuse Leads to Semantic Decay

    The increasing amount of available Linked Data resources is laying the foundations for more advanced Semantic Web applications. One of their main limitations, however, remains the generally low level of data quality. In this paper we focus on a measure of quality which is negatively affected by the increase in available resources. We propose a measure of the semantic richness of Linked Data concepts and demonstrate our hypothesis that the more a concept is reused, the less semantically rich it becomes. This is a significant scalability issue, as one of the core aspects of Linked Data is the propagation of semantic information on the Web by reusing common terms. We prove our hypothesis with respect to our measure of semantic richness and validate our model empirically. Finally, we suggest possible future directions to address this scalability problem.
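
    The abstract does not define the richness measure itself, so the snippet below is only a crude stand-in illustrating the kind of statistic involved: for each node, compare how often it is reused as an object against how many distinct predicates describe it as a subject. The file name and data are hypothetical.

```python
from collections import defaultdict
from rdflib import Graph

def reuse_vs_description(graph):
    """For each node: (times reused as an object, distinct predicates describing it).
    A crude proxy for illustration, not the paper's actual semantic-richness measure."""
    reused = defaultdict(int)
    predicates = defaultdict(set)
    for s, p, o in graph:
        reused[o] += 1
        predicates[s].add(p)
    return {node: (reused[node], len(predicates[node])) for node in reused}

g = Graph()
g.parse("linked_data_sample.ttl", format="turtle")   # hypothetical local dump
stats = sorted(reuse_vs_description(g).items(), key=lambda kv: -kv[1][0])
for node, (n_reused, n_predicates) in stats[:10]:     # ten most-reused nodes
    print(node, n_reused, n_predicates)
```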

    Development of Grid e-Infrastructure in South-Eastern Europe

    Over a period of six years and three phases, the SEE-GRID programme has established a strong regional human network in the area of distributed scientific computing and has set up a powerful regional Grid infrastructure. It has attracted a number of user communities and applications from diverse fields and from countries throughout South-Eastern Europe. From the infrastructure point of view, the first project phase established a pilot Grid infrastructure with more than 20 resource centers in 11 countries. During the subsequent two phases of the project, the infrastructure has grown to 55 resource centers with more than 6600 CPUs and 750 TB of disk storage, distributed across 16 participating countries. The inclusion of new resource centers into the existing infrastructure, as well as support for new user communities, has demanded the setup of regionally distributed core services, the development of new monitoring and operational tools, and close collaboration of all partner institutions in managing such a complex infrastructure. In this paper we give an overview of the development and current status of the SEE-GRID regional infrastructure and describe its transition to the NGI-based Grid model in EGI, with strong SEE regional collaboration.

    Estimating The Quality Of Data Using Provenance: A Case Study In eScience

    Data quality assessment is a key factor in data-intensive domains. The data deluge is aggravated by an increasing need for interoperability and cooperation across groups and organizations. New alternatives must be found to select the data that best satisfy users' needs in a given context. This paper presents a strategy to provide information to support the evaluation of the quality of data sets. The strategy combines metadata on the provenance of a data set (derived from the workflows that generate it) with quality dimensions defined by the set's users according to the desired context of use. Our solution, validated via a case study, takes advantage of a semantic model to preserve data provenance related to applications in a specific domain.
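
    As a loose sketch of the strategy summarised above (combining workflow-derived provenance metadata with user-defined quality dimensions for a given context of use); every field name, weight, and formula here is invented for illustration only.

```python
from datetime import datetime, timezone

# Hypothetical provenance record captured for a derived data set.
provenance = {
    "generated_by": "occurrence_cleaning_workflow",   # workflow that produced the set
    "source_trust": 0.8,                               # trust assigned to the raw source
    "generated_at": datetime(2023, 5, 1, tzinfo=timezone.utc),
    "validation_steps": 3,                             # checks executed by the workflow
}

# Quality dimensions and weights chosen by the users for their context of use.
weights = {"trust": 0.5, "freshness": 0.3, "validation": 0.2}

def quality_score(prov, weights, now=None):
    """Combine provenance-derived indicators into one weighted score in [0, 1]."""
    now = now or datetime.now(timezone.utc)
    age_days = (now - prov["generated_at"]).days
    freshness = max(0.0, 1.0 - age_days / 365.0)            # decays over a year
    validation = min(1.0, prov["validation_steps"] / 5.0)   # saturates at 5 steps
    return (weights["trust"] * prov["source_trust"]
            + weights["freshness"] * freshness
            + weights["validation"] * validation)

print(round(quality_score(provenance, weights), 3))
```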

    High-Performance Cloud Computing: A View of Scientific Applications

    Scientific computing often requires the availability of a massive number of computers for performing large-scale experiments. Traditionally, these needs have been addressed by using high-performance computing solutions and installed facilities such as clusters and supercomputers, which are difficult to set up, maintain, and operate. Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. Compute resources, storage resources, and applications can be dynamically provisioned (and integrated within the existing infrastructure) on a pay-per-use basis. These resources can be released when they are no longer needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensures the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers to users the desired QoS. Its flexible and service-based infrastructure supports multiple programming paradigms that allow Aneka to address a variety of scenarios: from finance applications to computational science. As examples of scientific computing in the Cloud, we present a preliminary case study on using Aneka for the classification of gene expression data and the execution of an fMRI brain imaging workflow.
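
    Aneka's actual programming interfaces are not shown in the abstract. The toy loop below only illustrates the pay-per-use elasticity idea described above: keep provisioning nodes while a QoS target from the SLA is violated, then release everything once the work queue is empty. All numbers, names, and the cost model are invented.

```python
SLA_MAX_RESPONSE = 2.0      # hypothetical QoS target taken from the SLA
COST_PER_NODE_HOUR = 0.10   # hypothetical pay-per-use price

def run_workload(n_tasks, nodes=1):
    """Provision extra nodes while the (fake) response time violates the SLA,
    charge per node-hour, and stop once the queue is empty."""
    cost = hours = 0
    while n_tasks > 0:
        response_time = 0.5 * n_tasks / nodes    # stand-in QoS metric: backlog per node
        if response_time > SLA_MAX_RESPONSE:
            nodes += 1                           # dynamically provision one more node
        n_tasks -= min(nodes, n_tasks)           # each node finishes one task per "hour"
        cost += nodes * COST_PER_NODE_HOUR
        hours += 1
    return nodes, round(cost, 2), hours          # resources can now be released

print(run_workload(20))
```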