
    GAMESPECT: A Composition Framework and Meta-Level Domain Specific Aspect Language for Unreal Engine 4

    Game engine programming involves a great number of software components, many of which perform similar tasks; for example, memory allocation must take place in the renderer as well as in the creation routines, while other tasks such as error logging must take place everywhere. One area of all games that is critical to a game's success is balance and tuning. These balancing initiatives cut across all areas of code, from the player and AI to the mission manager. In computer science, we have come to call these types of concerns "cross-cutting". Aspect-oriented programming was developed, in part, to solve the problems of cross-cutting concerns, employing "advice" that can be incorporated across different pieces of functionality. Yet, despite the prevalence of a solution, very little work has been done to bring cross-cutting to game engine programming. Additionally, the discipline involves a heavy amount of code rewriting and reuse while simultaneously relying on many common design patterns that are copied from one project to another. In the case of game balance, the code may be wildly different across two games even though similar tasks are being done. These two problems are exacerbated by the fact that almost every game engine has its own custom DSL (domain-specific language) unique to that situation. If a DSL could expose the areas of cross-cutting concerns while capturing design patterns that can be reused across games, significant productivity savings could be achieved while simultaneously creating a common thread for discussing shared problems within the domain. This dissertation sought to do exactly that: create a metalanguage called GAMESPECT which supports multiple styles of DSLs while bringing aspect-oriented programming into those DSLs, making them DSALs (domain-specific aspect languages). The example cross-cutting concern was game balance and tuning, since it is so pervasive and important to gaming. We have created GAMESPECT as a language and a composition framework which can assist engine developers and game designers in balancing their games, forming one central place for game-balancing concerns even when those concerns cross different languages and locations inside the source code. Generality was measured by showcasing the composition specifications in multiple contexts and languages. In addition to evaluating generality and performance metrics, effectiveness was measured: specifically, comparisons were made between a balancing initiative performed with GAMESPECT and one performed with a traditional methodology. In doing so, this work shows a clear advantage to using a metalanguage such as GAMESPECT for this task. In general, a lines-of-code reduction of 9-40% per task was achieved with negligible effect on performance. The use of a metalanguage in Unreal Engine 4 is a starting point for further discussions concerning other game engines. In addition, this work has implications beyond video game programming: it highlights benefits that might be achieved in other disciplines where design-pattern implementation and cross-cutting concerns are prevalent; real-time simulation and Windows GUI programming are two examples of future domains.
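
    The abstract does not reproduce GAMESPECT's own syntax, so the following Python sketch only illustrates the underlying aspect-oriented idea it describes: balance "advice" woven around scattered game functions from one central place. All names here (TUNING, tuned, base_damage, aggression) are invented for the example.

    ```python
    # A minimal sketch (not GAMESPECT syntax) of the aspect-oriented idea:
    # balance "advice" woven around game functions from one central place.
    # All names here (TUNING, tuned, base_damage, aggression) are invented.
    import functools

    TUNING = {"player.damage": 12.0, "ai.aggression": 0.7}  # central balance table

    def tuned(key):
        """Around-style advice: override the decorated function's result
        with the centrally tuned value, so balancing never edits game code."""
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                fn(*args, **kwargs)        # original logic still runs
                return TUNING[key]         # tuned value woven in from outside
            return wrapper
        return decorator

    @tuned("player.damage")
    def base_damage(weapon):               # hard-coded default in player code...
        return 10.0

    @tuned("ai.aggression")
    def aggression(enemy):                 # ...and another one in AI code
        return 0.5

    print(base_damage("sword"), aggression("grunt"))  # -> 12.0 0.7
    ```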

    A Data-driven, High-performance and Intelligent CyberInfrastructure to Advance Spatial Sciences

    In the field of Geographic Information Science (GIScience), we have witnessed an unprecedented data deluge brought about by the rapid advancement of high-resolution data-observing technologies. For example, with the advancement of Earth Observation (EO) technologies, massive amounts of EO data, including remote sensing data and other sensor observations of earthquakes, climate, oceans, hydrology, volcanoes, glaciers, etc., are being collected on a daily basis by a wide range of organizations. In addition to observation data, human-generated data, including microblogs, photos, consumption records, evaluations, unstructured webpages and other Volunteered Geographic Information (VGI), are incessantly generated and shared on the Internet. Meanwhile, the emerging cyberinfrastructure rapidly increases our capacity for handling such massive data with regard to data collection and management, data integration and interoperability, data transmission and visualization, high-performance computing, etc. Cyberinfrastructure (CI) consists of computing systems, data storage systems, advanced instruments and data repositories, visualization environments, and people, all linked together by software and high-performance networks to improve research productivity and enable breakthroughs that are not otherwise possible. Geospatial CI (GCI, or CyberGIS), as the synthesis of CI and GIScience, has inherent advantages in enabling computationally intensive spatial analysis and modeling (SAM) and collaborative geospatial problem solving and decision making. This dissertation is dedicated to addressing several critical issues and improving the performance of existing methodologies and systems in the field of CyberGIS. It comprises three parts. The first part develops methodologies, based on machine learning and semantic search, to help researchers efficiently and effectively find appropriate open geospatial datasets among millions of records provided by thousands of organizations scattered around the world. The second part develops an interoperable and replicable geoprocessing service by synthesizing a high-performance computing (HPC) environment, the core spatial statistics and analysis algorithms from the widely adopted open-source Python Spatial Analysis Library (PySAL), and the rich datasets acquired in the first part. The third part studies optimization strategies for feature-data transmission and visualization, aiming to solve the performance issues of transmitting large feature data over the Internet and visualizing them on the client (browser) side. Taken together, the three parts constitute an endeavor towards the methodological improvement and practical implementation of a data-driven, high-performance and intelligent CI to advance the spatial sciences.
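
    As a flavor of the PySAL-backed spatial analysis the second part would expose as a service, here is a minimal sketch computing global Moran's I. It assumes the current libpysal/esda package split and uses synthetic lattice data; the dissertation's actual service code is not shown in the abstract.

    ```python
    # A sketch of the kind of PySAL-backed statistic such a geoprocessing
    # service might wrap: global Moran's I on synthetic lattice data.
    # Assumes current libpysal/esda packaging, not the dissertation's code.
    import numpy as np
    from libpysal.weights import lat2W
    from esda.moran import Moran

    y = np.random.default_rng(42).random(25)  # attribute values on a 5x5 grid
    w = lat2W(5, 5)                           # rook-contiguity spatial weights
    w.transform = "r"                         # row-standardize the weights

    mi = Moran(y, w)                          # global spatial autocorrelation
    print(mi.I, mi.p_sim)                     # statistic and permutation p-value
    ```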

    A Multi-Code Analysis Toolkit for Astrophysical Simulation Data

    The analysis of complex multiphysics astrophysical simulations presents a unique and rapidly growing set of challenges: reproducibility, parallelization, and vast increases in data size and complexity chief among them. In order to meet these challenges, and to open up new avenues for collaboration between users of multiple simulation platforms, we present yt (available at http://yt.enzotools.org/), an open source, community-developed astrophysical analysis and visualization toolkit. Analysis and visualization with yt are oriented around physically relevant quantities rather than quantities native to astrophysical simulation codes. While originally designed for handling Enzo's structured adaptive mesh refinement (AMR) data, yt has been extended to work with several different simulation methods and simulation codes, including Orion, RAMSES, and FLASH. We report on its methods for reading, handling, and visualizing data, including projections, multivariate volume rendering, multi-dimensional histograms, halo finding, light cone generation, and topologically connected isocontour identification. Furthermore, we discuss the underlying algorithms yt uses for processing and visualizing data, and its mechanisms for parallelization of analysis tasks.
    Comment: 18 pages, 6 figures, emulateapj format. Resubmitted to Astrophysical Journal Supplement Series with revisions from referee. yt can be found at http://yt.enzotools.org
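
    For readers unfamiliar with yt, a minimal sketch of its physically oriented workflow follows. It is written against yt's present-day Python API (yt.load, ProjectionPlot), and the Enzo output path is a placeholder rather than a dataset from the paper.

    ```python
    # A minimal taste of yt's physically oriented workflow, written against
    # yt's present-day Python API; the Enzo output path is a placeholder.
    import yt

    ds = yt.load("galaxy0030/galaxy0030")    # same call works for Enzo, FLASH, ...
    ad = ds.all_data()
    print(ad.quantities.extrema(("gas", "density")))  # physical field, not code-native

    p = yt.ProjectionPlot(ds, "z", ("gas", "density"))  # line-of-sight projection
    p.save("density_projection.png")
    ```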

    Processing Structured Hypermedia: A Matter of Style

    With the introduction of the World Wide Web in the early nineties, hypermedia has become the uniform interface to the wide variety of information sources available over the Internet. The full potential of the Web, however, can only be realized by building on the strengths of its underlying research fields. This book describes the areas of hypertext, multimedia, electronic publishing and the World Wide Web and points out fundamental similarities and differences in their approaches to the processing of information. It gives an overview of the dominant models and tools developed in these fields and describes the key interrelationships and mutual incompatibilities. In addition to a formal specification of a selection of these models, the book discusses the impact of the models described on the software architectures that have been developed for processing hypermedia documents. Two example hypermedia architectures are described in more detail: the DejaVu object-oriented hypermedia framework, developed at the VU, and CWI's Berlage environment for time-based hypermedia document transformations.
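
    Neither DejaVu nor Berlage code appears in this overview, so purely as an illustration of the "style" idea in the title, the sketch below renders one structured hypermedia tree through a swappable style table. All names are invented for the example.

    ```python
    # Illustration only (no DejaVu or Berlage code): style-driven processing
    # of a structured hypermedia tree, where a swappable style table decides
    # how each node type is rendered. All names are invented for the example.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        kind: str                    # e.g. "seq", "text", "image"
        content: str = ""
        children: list = field(default_factory=list)

    HTML_STYLE = {                   # one "style": render the tree as HTML
        "seq":   lambda content, kids: f"<div>{kids}</div>",
        "text":  lambda content, kids: f"<p>{content}</p>",
        "image": lambda content, kids: f'<img src="{content}">',
    }

    def render(node, style):
        kids = "".join(render(child, style) for child in node.children)
        return style[node.kind](node.content, kids)

    doc = Node("seq", children=[Node("text", "Hello"), Node("image", "fig1.png")])
    print(render(doc, HTML_STYLE))   # swap in a different style table to retarget
    ```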

    From SpaceStat to CyberGIS: Twenty Years of Spatial Data Analysis Software

    This essay assesses the evolution of the way in which spatial data analytical methods have been incorporated into software tools over the past two decades. It is part retrospective and part prospective, going beyond a historical review to outline some of the important factors that drove the software development, such as methodological advances, the open source movement, and the advent of the Internet and cyberinfrastructure. The review highlights activities carried out by the author and his collaborators, using SpaceStat, GeoDa, PySAL and recent spatial analytical web services developed at the ASU GeoDa Center as illustrative examples. It outlines a vision for a spatial econometrics workbench as an example of the incorporation of spatial analytical functionality in a cyberGIS.
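
    As a concrete taste of the workbench functionality envisioned here, the sketch below runs an OLS regression with spatial diagnostics through PySAL's spreg on synthetic data. It assumes today's libpysal/spreg packaging, not SpaceStat-era software.

    ```python
    # A taste of the envisioned workbench functionality: an OLS regression
    # with spatial diagnostics via PySAL's spreg, on synthetic data.
    # Assumes today's libpysal/spreg packaging, not SpaceStat-era software.
    import numpy as np
    from libpysal.weights import lat2W
    from spreg import OLS

    w = lat2W(10, 10)                          # contiguity weights on a 10x10 grid
    w.transform = "r"                          # row-standardize, GeoDa-style

    rng = np.random.default_rng(0)
    x = rng.random((100, 2))
    y = x @ np.array([[1.5], [-0.8]]) + rng.normal(size=(100, 1))

    model = OLS(y, x, w=w, spat_diag=True, moran=True,
                name_y="y", name_x=["x1", "x2"])
    print(model.summary)                       # coefficients plus LM/Moran diagnostics
    ```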

    Domain Specific Languages versus Frameworks

    This master's thesis compares domain-specific languages with frameworks. The problem is investigated by implementing both a domain-specific language and a framework for building electronic web shops. The first part of the problem statement concerns the comparison of the two techniques. The second part examines whether it is possible to build a static semantic analyzer for a framework. The experiment conducted is described in detail. The first part covers a domain analysis and the design of both the domain-specific language and the framework. The experiment in the second part investigates whether the framework can be modified to support static semantic analysis. The thesis suggests that a significant advantage of the domain-specific language is its ability to implement a domain model precisely, while a significant advantage of the framework is that it is both implemented and used in a general-purpose programming language.
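
    The thesis's actual web-shop DSL and framework are not shown in the abstract; the following Python fragment merely contrasts the two styles it compares, a declarative internal DSL versus framework-style subclassing, with invented names throughout.

    ```python
    # Illustration only (the thesis's actual DSL and framework are not shown):
    # the same web-shop catalogue as a declarative internal DSL versus
    # framework-style subclassing. All names are invented for the example.

    # DSL style: the domain model is stated directly as data.
    SHOP_SPEC = {
        "name": "BookShop",
        "products": [
            {"title": "SICP", "price": 49.90},
            {"title": "TAPL", "price": 59.00},
        ],
    }

    # Framework style: behaviour obtained by extending general-purpose code.
    class Shop:
        def __init__(self, name):
            self.name, self.products = name, []

        def add_product(self, title, price):
            self.products.append((title, price))

        def inventory_value(self):
            return sum(price for _, price in self.products)

    shop = Shop(SHOP_SPEC["name"])
    for p in SHOP_SPEC["products"]:
        shop.add_product(p["title"], p["price"])
    print(shop.inventory_value())    # framework code is ordinary Python, so
                                     # general-purpose tooling applies directly
    ```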

    GUISET: A Conceptual Design of a Grid-Enabled Portal for E-Commerce On-Demand Services

    Conventional grid-enabled portal designs have been largely influenced by the usual functional requirements, such as security requirements, grid resource requirements and job management requirements. However, the pay-as-you-use service provisioning model of utility computing platforms means that additional requirements must be considered in order to realize effective grid-enabled portal design for such platforms. This work investigates the relevant additional requirements that must be considered when designing grid-enabled portals for utility computing contexts. Based on a thorough review of the literature, we identified a number of these additional requirements and developed a grid-enabled portal prototype for the Grid-based Utility Infrastructure for SMME-enabling Technology (GUISET) initiative, a utility computing platform. The GUISET portal was designed to cater for both the traditional grid requirements and some of the relevant additional requirements of utility computing contexts. The evaluation of the GUISET portal prototype against a set of benchmark requirements (standards) revealed that it fulfilled the minimum requirements to be suitable for the utility context.

    Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)

    The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was to train students attending Architecture and Engineering courses in geospatial data acquisition and processing, in order to start up a team of "volunteer mappers". The project aims to document the environmental and built heritage subject to disaster, and to improve the capabilities of the actors involved in geospatial data collection, integration and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, registered in the World Heritage List since 1997, which was affected by a flood on 25 October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatics methods and techniques, such as terrestrial and aerial LiDAR, close-range and aerial photogrammetry, and topographic and GNSS instruments, or by non-conventional systems and instruments such as UAVs and mobile mapping. The ultimate goal is to implement a WebGIS platform to share all the collected data with local authorities and the Civil Protection.

    Extending the remit of evidence-based policing

    Evidence-based policing (EBP) is an important strand of the UK College of Policing's Police Education Qualifications Framework (PEQF), itself a component of a professionalisation agenda. This article argues that the two dominant approaches to EBP, experimental criminology and crime science, offer limited scope for the development of a comprehensive knowledge base for policing. Although both approaches share a common commitment to the values of science, each recognizes its limited coverage of policing topics. The fundamental difference between them is what each considers 'best' evidence. This article critically examines the generation of evidence by these two approaches and proposes extending the range of issues EBP should cover by utilizing a greater plurality of methods to exploit relevant research. Widening the scope of EBP would provide a broader foundational framework for inclusion in the PEQF and offers the potential for identifying gaps in the research, constructing building blocks for knowledge, and developing syllabi for higher-level police education.
    • 

    corecore