Procedural City Generation with Combined Architectures for Real-time Visualization
The work and research of this paper sought to build upon traditional city generation and simulation by creating a tool that both realistically simulates cities and their prominent features and creates aesthetically and artistically rich cities, using assets that combine several contemporary or near-contemporary architectural styles. The major city features simulated are the surrounding terrain, road networks, individual buildings, and building placement. The tools used to create and integrate these features were built in Houdini, with Unreal Engine 5 as the intended final destination. This research was influenced by the city, town, and road networking of Ghost Recon: Wildlands, which exhibits successful creation and integration of cities in a real-time open world that produces a holistic and visually compelling experience. The software used in the development of this project included Houdini, Maya, Unreal Engine 5, and ZBrush, as well as Adobe Substance Designer, Substance Painter, and Photoshop. The city generation tool was built with the intent that it would be flexible; in this context, flexibility refers to the capability to create many different kinds of city regions based on user specifications. Region size, road density and connectivity, and building types are examples of qualities of the city that can be directly controlled. The tool currently uses one set of city assets created with the intent for use together and an overall design cohesion, but it is also built flexibly enough that new building assets could be included, requiring only the addition of building generators for the new set. Alternatively, assets developed with the current generation methods in mind could be used to change the visual style of the city. Buildings were both generated and placed based on a district classification. Buildings were categorized as small residential, large residential, religious, and government/commercial before being placed in appropriate locations in the city based on user district specifications.
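The district-driven placement described above can be illustrated with a minimal sketch. This is not the paper's Houdini implementation; all names and the category mapping below are hypothetical, chosen only to mirror the abstract's idea of assigning building types to lots according to a user-specified district classification.

```python
import random

# Hypothetical building categories, loosely following the abstract's four
# building types (small/large residential, religious, government/commercial).
BUILDING_TYPES = {
    "residential": ["small_residential", "large_residential"],
    "civic": ["religious", "government_commercial"],
}

def place_buildings(lots, district_spec, seed=0):
    """Assign a building type to each lot based on its district's category.

    lots          -- mapping of lot id -> district name
    district_spec -- mapping of district name -> category (user specification)
    """
    rng = random.Random(seed)  # seeded for reproducible generation
    placements = {}
    for lot, district in lots.items():
        category = district_spec[district]           # e.g. "residential"
        placements[lot] = rng.choice(BUILDING_TYPES[category])
    return placements

lots = {"lot_a": "old_town", "lot_b": "old_town", "lot_c": "center"}
district_spec = {"old_town": "residential", "center": "civic"}
print(place_buildings(lots, district_spec))
```

A real tool would of course operate on geometry rather than dictionaries, but the control flow (user district specification drives per-lot building selection) is the same.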
Non-determinism in the narrative structure of video games
PhD Thesis. At the present time, computer games represent a finite interactive system. Even in their more experimental forms, the number of possible interactions between the player and NPCs (non-player characters), and among NPCs and the game world, is finite and governed by a deterministic system in which events can therefore be predicted. This implies that the story itself, seen as the series of events that unfold during gameplay, is a closed system that can be predicted a priori. This study looks beyond this limitation and identifies the elements needed for the emergence of a non-finite, emergent narrative structure. Two major contributions are offered through this research. The first comes in the form of a clear categorization of the narrative structures embracing all video game production since the inception of the medium. In order to look for ways to generate a non-deterministic narrative in games, it is necessary first to gain a clear understanding of the narrative structures currently implemented and how they impact users' experience of the story. While many studies have observed the storytelling aspect, no attempt has been made to systematically distinguish among the different ways designers decide how stories are told in games. The second contribution is guided by the following research question: is it possible to incorporate non-determinism into the narrative structure of computer games? The hypothesis offered is that non-determinism can be incorporated by means of nonlinear dynamical systems in general and Cellular Automata in particular.
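Why Cellular Automata are a plausible vehicle for this hypothesis can be seen in a few lines of code. The sketch below (illustrative only, not the thesis's implementation) steps an elementary CA such as Rule 110, which is known to be Turing-complete: even from trivially simple rules and a single live cell, structure emerges whose long-term evolution cannot in general be predicted without actually running the system.

```python
# Elementary cellular automaton: each cell's next state depends only on its
# 3-cell neighborhood, looked up in the bits of the Wolfram rule number.
RULE = 110

def step(cells, rule=RULE):
    """Advance one generation, with wrap-around edges."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value 0..7
        nxt.append((rule >> neighborhood) & 1)              # rule bit lookup
    return nxt

# Start from a single live cell and watch complex structure emerge.
cells = [0] * 31
cells[15] = 1
history = [cells]
for _ in range(10):
    cells = step(cells)
    history.append(cells)

for row in history:
    print("".join("#" if c else "." for c in row))
```

The relevance to narrative is only by analogy here: a story engine driven by such dynamics would be deterministic at the rule level yet effectively unpredictable at the level of unfolding events, which is the kind of emergence the thesis investigates.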
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, diverse, and high-quality data. Despite this, existing
open-source tools for LLM data processing remain limited and mostly tailored to
specific datasets, with an emphasis on the reproducibility of released data
over adaptability and usability, inhibiting potential applications. In
response, we propose a one-stop, powerful yet flexible and user-friendly LLM
data processing system named Data-Juicer. Our system offers over 50 built-in
versatile operators and pluggable tools, which synergize modularity,
composability, and extensibility dedicated to diverse LLM data processing
needs. By incorporating visualized and automatic evaluation capabilities,
Data-Juicer enables a timely feedback loop to accelerate data processing and
gain data insights. To enhance usability, Data-Juicer provides out-of-the-box
components for users with various backgrounds, and fruitful data recipes for
LLM pre-training and post-tuning usages. Further, we employ multi-facet system
optimization and seamlessly integrate Data-Juicer with both LLM and distributed
computing ecosystems, to enable efficient and scalable data processing.
Empirical validation of the generated data recipes reveals considerable
improvements in LLaMA performance for various pre-training and post-tuning
cases, demonstrating up to 7.45% relative improvement of averaged score across
16 LLM benchmarks and 16.25% higher win rate using pair-wise GPT-4 evaluation.
The system's efficiency and scalability are also validated, supported by up to
88.7% reduction in single-machine processing time, 77.1% and 73.1% less memory
and CPU usage respectively, and 7.91x processing acceleration when utilizing
distributed computing ecosystems. Our system, data recipes, and multiple
tutorial demos are released, calling for broader research centered on LLM data.
Comment: Under continuous maintenance and updating; the system, refined data recipes, and demos are at https://github.com/alibaba/data-juice
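The operator-pipeline idea the abstract describes (composable, pluggable filters and mappers assembled into a data "recipe") can be sketched minimally. This is NOT Data-Juicer's actual API; every class and filter below is a hypothetical stand-in for the general pattern of chaining modular operators over a stream of samples.

```python
from dataclasses import dataclass
from typing import Callable

Sample = dict  # e.g. {"text": "..."}

@dataclass
class Mapper:
    """Transforms each sample (e.g. cleaning or normalizing text)."""
    fn: Callable[[Sample], Sample]
    def __call__(self, samples):
        return (self.fn(s) for s in samples)

@dataclass
class Filter:
    """Keeps only samples that satisfy a predicate (e.g. quality checks)."""
    predicate: Callable[[Sample], bool]
    def __call__(self, samples):
        return (s for s in samples if self.predicate(s))

def run_pipeline(samples, ops):
    """Lazily chain operators, then materialize the result."""
    for op in ops:
        samples = op(samples)
    return list(samples)

# A toy "recipe": normalize whitespace, then drop short or non-ASCII samples.
recipe = [
    Mapper(lambda s: {**s, "text": s["text"].strip()}),
    Filter(lambda s: len(s["text"]) >= 10),   # drop near-empty samples
    Filter(lambda s: s["text"].isascii()),    # crude charset filter
]

data = [
    {"text": "  Hello, world! This is a sample.  "},
    {"text": "short"},
    {"text": "Résumé données non-ASCII sample text"},
]
print(run_pipeline(data, recipe))
```

Because each operator only consumes and produces a sample stream, operators compose freely and new ones can be plugged in without touching the rest of the recipe, which is the modularity/composability property the abstract emphasizes.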
A Novel Machine Learning Classifier Based on a Qualia Modeling Agent (QMA)
This dissertation addresses a problem found in supervised machine learning (ML) classification: the target variable, i.e., the variable a classifier predicts, has to be identified before training begins and cannot change during training and testing. This research develops a computational agent that overcomes this problem. The Qualia Modeling Agent (QMA) is modeled after two cognitive theories: Stanovich's tripartite framework, which proposes that learning results from interactions between conscious and unconscious processes; and the Integrated Information Theory (IIT) of Consciousness, which proposes that the fundamental structural elements of consciousness are qualia. By modeling the informational relationships of qualia, the QMA allows for retaining and reasoning over data sets in a non-ontological, non-hierarchical qualia space (QS). This novel computational approach supports concept drift by allowing the target variable to change ad infinitum without re-training, while achieving classification accuracy comparable to or greater than benchmark classifiers. Additionally, the research produced a functioning model of Stanovich's framework and a computationally tractable working solution for a representation of qualia which, when exposed to new examples, is able to match the causal structure and generate new inferences.
Computer-Assisted Interactive Documentary and Performance Arts in Illimitable Space
The major component of the research described in this thesis is 3D computer graphics, specifically realistic physics-based soft-body simulation and haptic responsive environments. Minor components include advanced human-computer interaction environments, non-linear documentary storytelling, and theatre performance. The journey of this research has been unusual because
it requires a researcher with solid knowledge and background in multiple disciplines, who also has to be creative and sensitive in order to combine these areas into a new research direction. [...] It focuses on advanced computer graphics and emerges from experimental cinematic works and theatrical artistic practices. Several development artifacts and installations were completed to demonstrate and evaluate the described concepts. [...] To summarize, the resulting work involves not only artistic creativity but also solving or combining technological hurdles in motion tracking, pattern recognition, force-feedback control, etc., with the available documentary footage on film, video, or images, and text via a variety of devices [....] and programming and installing all the needed interfaces such that everything works in real time. Thus, the contribution to knowledge lies in solving these interfacing problems and the real-time aspects of the interaction, which have uses in the film industry, the fashion industry, new-age interactive theatre, computer games, and web-based technologies and services for entertainment and education. It also includes building on this experience to integrate Kinect- and haptic-based interaction, artistic scenery rendering, and other forms of control. This research connects seemingly disjoint fields, such as computer graphics, documentary film, interactive media, and theatre performance.
Comment: PhD thesis copy; 272 pages, 83 figures, 6 algorithms