    Shaping the Future of Animation: The Role of 3D Simulation Technology in Animation Film and Television

    The application of 3D simulation technology has revolutionized the field of animation film and television art, providing new possibilities and creative opportunities for visual storytelling. This research explores the application of 3D simulation technology in animation film and television art. It examines how 3D simulation technology enhances the creation of realistic characters, environments, and special effects, contributing to immersive and captivating storytelling experiences. The research also investigates the technical aspects of integrating 3D cloud simulation technology into the animation production pipeline, including modeling, texturing, rigging, and animation techniques. The paper then explores the application of two metaheuristic optimization algorithms, Black Widow Optimization and Spider Monkey Optimization, in the context of cloud-based 3D environments, focusing on enhancing the efficiency and performance of 3D simulations. These algorithms can be used to optimize the placement and distribution of 3D assets in cloud storage systems, improving data access and retrieval times, and to optimize the scheduling of rendering tasks in cloud-based rendering pipelines, leading to more efficient and cost-effective rendering processes. The integration of 3D cloud environments and optimization algorithms enables real-time optimization and adaptation of 3D simulations, allowing simulation parameters to be adjusted dynamically as conditions change and improving accuracy and responsiveness. Moreover, the research examines the impact of 3D cloud simulation technology on the artistic process, including its influence on artistic vision, aesthetics, and narrative possibilities in animation film and television. The findings highlight the advantages and challenges of using 3D simulation technology in animation, shedding light on its potential future developments and its role in shaping the future of animation film and television art.
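    The abstract gives no implementation details, but the render-scheduling idea can be illustrated with a small sketch. Below is a toy population-based optimizer in the spirit of Spider Monkey Optimization that assigns rendering tasks to cloud render nodes so as to minimize makespan; the names, cost model, and fitness function are illustrative assumptions, not the authors' code.

```python
import random

# Toy population-based metaheuristic (in the spirit of Spider Monkey
# Optimization) for assigning render tasks to cloud nodes so that the
# makespan (finish time of the busiest node) is minimized.
# The task costs below are made-up estimates for illustration.

TASK_COSTS = [4.0, 2.5, 7.0, 1.0, 3.5, 5.0]   # estimated render time per task
NUM_NODES = 3
POP_SIZE = 20
GENERATIONS = 200

def makespan(assignment):
    """Fitness: completion time of the most heavily loaded render node."""
    loads = [0.0] * NUM_NODES
    for task, node in enumerate(assignment):
        loads[node] += TASK_COSTS[task]
    return max(loads)

def mutate(assignment):
    """Local move: reassign one randomly chosen task to a random node."""
    child = list(assignment)
    child[random.randrange(len(child))] = random.randrange(NUM_NODES)
    return child

# Start from a population of random task-to-node assignments.
population = [[random.randrange(NUM_NODES) for _ in TASK_COSTS]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Each member explores around the current best solution (the "leader"),
    # keeping whichever of the two assignments has the lower makespan.
    leader = min(population, key=makespan)
    population = [min(mutate(leader), member, key=makespan)
                  for member in population]

best = min(population, key=makespan)
print("best assignment:", best, "makespan:", makespan(best))
```

    A full Spider Monkey Optimization implementation additionally splits the population into groups with local leaders; the single-leader loop above is a deliberate simplification.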

    Scare Tactics

    This document describes the design and development of Scare Tactics. The game is discussed as it relates to several areas, including market analysis, development process, game design, technical design, and each team member's individual area of background research. The research areas include asymmetrical game design, level design, game engine architecture, real-time graphics, user interface design, networking, and artificial intelligence. As part of the team's market analysis, other games featuring asymmetric gameplay are discussed; these games serve as inspirations for asymmetric game design, and some implement mechanics that the team seeks to emulate and expand upon in Scare Tactics. As part of the development process, several concepts were prototyped over the course of two months, during which the team adopted an Agile methodology to assist with scheduling, communication, and resource management. The team eventually chose to expand upon the prototype that became the basis of Scare Tactics. Game design and technical design occur concurrently in the development of Scare Tactics: designers conduct discussions in which themes, settings, and mechanics are conceived and documented, and mechanics are prototyped in Unity and eventually ported to a proprietary engine developed by the team. Throughout development, each team member has owned an area of design or development, leading to individual research in several areas that is discussed further in this document.

    Development of generic scheduling concepts for OpenGL ES 2.0

    The ability of a Graphics Processing Unit (GPU) to perform efficient and massively parallel computations makes it the natural choice for 3D graphics applications. It has been used extensively as a hardware accelerator to boost the performance of a single application, such as a 3D game. However, due to the increasing number of 3D rendering applications and limited resources (especially on embedded platforms, where cost and space are constrained), a single GPU may need to be shared between multiple concurrent applications (GPU multitasking). For safety-relevant scenarios in particular, such as automotive applications, certain Quality of Service (QoS) requirements apply, including average frame rates and priorities. In this work we analyze and discuss the requirements and concepts for scheduling 3D rendering commands, and we propose our Fine-Grained Semantics Driven Scheduling (FG-SDS) concept. Since existing GPUs cannot be preempted, the execution of GPU command blocks is selectively delayed depending on the applications' priorities and frame rate requirements. Because FG-SDS supports and uses the OpenGL ES 2.0 rendering API, it is highly portable and flexible. We have implemented FG-SDS and evaluated its performance and effectiveness on an automotive embedded system. Our evaluations indicate that FG-SDS is able to ensure that the required frame rates and deadlines of the high-priority application are met, provided the schedule is feasible. The overhead introduced by GPU scheduling is non-negligible but considered reasonable with respect to the GPU resource prioritization that it achieves.
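    The abstract describes selectively delaying GPU command blocks rather than preempting the GPU. A minimal sketch of that decision logic might look like the following; the class, function names, and time estimates are assumptions for illustration, not the FG-SDS implementation.

```python
# Hypothetical sketch of the submission decision described above: the GPU
# cannot be preempted, so a low-priority command block is only submitted
# if it is expected to finish before the high-priority application's next
# frame deadline; otherwise it is delayed.

class CommandBlock:
    def __init__(self, app_name, priority, est_gpu_time):
        self.app_name = app_name
        self.priority = priority          # 0 = highest priority
        self.est_gpu_time = est_gpu_time  # estimated GPU execution time (s)

def next_block(ready_blocks, hp_deadline, now):
    """Choose the next command block to submit, or None to delay."""
    if not ready_blocks:
        return None
    ready_blocks.sort(key=lambda b: b.priority)
    head = ready_blocks[0]
    # High-priority work is always submitted immediately.
    if head.priority == 0:
        return ready_blocks.pop(0)
    # Low-priority work runs only if it cannot push the high-priority
    # frame past its deadline (the GPU is non-preemptible).
    if now + head.est_gpu_time <= hp_deadline:
        return ready_blocks.pop(0)
    return None  # delay: keep the GPU free for the upcoming HP frame

# Example: with 3 ms left before the HP deadline, a 5 ms low-priority
# block is delayed rather than submitted.
blocks = [CommandBlock("navigation", 1, 0.005)]
print(next_block(blocks, hp_deadline=0.003, now=0.0))  # None (delayed)
```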

    2018-2019

    Contains information on courses and class descriptions as well as campus resources at Collin College.

    Seventh Biennial Report: June 2003 - March 2005


    Smart Solutions: Smart Grid Demokit

    Work developed within the framework of the 'European Project Semester' program. The purpose of this report is to justify the design choices of the smart grid demo kit. The goal was to design something that makes smart grids clear to people with little prior knowledge of them: the product had to be appealing and easy to understand, and ultimately usable at, for example, an information market. The first part of the research consisted of determining how to shape the whole system: how the 'tiles' should look to be interactive for users and what they should feature. One part of this was research into users' existing knowledge; another investigated what appeals most to users. After this, a concept was created in agreement with the group and the client. The concept consists of hexagonal tiles, each with a different function: houses, solar panels, wind turbines, factories, and energy storage. These tiles represent the different parts of a smart grid, and by combining them it can be made clear to users how smart grids work. The tiles are fabricated using a combination of 3D printing and laser cutting, with laser-cut symbols on top to show which part of the smart grid they represent. Digital LED strips on top of the tiles show the direction of the energy flow, and their colors indicate whether the tile is producing power or consuming it from the grid. The tiles are connected to each other by so-called 'grid blocks'; these blocks make up the central power grid and are also lit by LED strips. Each tile is equipped with a microcontroller that controls the LED strips and allows the tiles to 'talk' to each other, so the central tile knows which tiles are connected to the system. The central tile controls all tiles and runs the simulation of the smart grid. For further development of the project, controlling and adjusting the system from an external device, for example a tablet, could be investigated. The final product consists of five tiles connected by seven grid blocks that together show how a smart grid works.
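    As a small illustration of the behavior the report describes (color indicating producing versus consuming, animation direction following the energy flow), a tile's firmware logic might resemble the sketch below; the function, color names, and wattages are assumptions, not the project's actual code.

```python
# Illustrative sketch of a tile microcontroller's LED logic: the color
# indicates whether the tile produces or consumes power, and the flow
# direction tells the LED strip which way to animate.

def tile_led_state(produced_w, consumed_w):
    """Return (color, flow_direction) for a tile's LED strip."""
    net = produced_w - consumed_w
    if net > 0:
        return ("green", "to_grid")    # tile feeds power into the grid
    if net < 0:
        return ("red", "from_grid")    # tile draws power from the grid
    return ("off", "none")             # balanced: nothing to display

# Example: a solar-panel tile producing 300 W with no local load.
print(tile_led_state(300.0, 0.0))   # ('green', 'to_grid')
```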

    Co-scheduling Real-time Tasks and Non Real-time Tasks Using Empirical Probability Distribution of Execution Time Requirements

    We present a novel co-scheduling algorithm for real-time (RT) and non-real-time response-time-sensitive (TS) tasks. Previous co-scheduling algorithms focused on providing isolation to the tasks without considering the impact of the scheduling of RT tasks on the response times of TS tasks. To best utilize the available processing capacity, the number of jobs qualifying for acceptable performance should be maximized: a good scheduling algorithm reduces the deadline overrun times for soft real-time tasks and the response times for TS tasks, while meeting deadline guarantees for RT tasks. We present a formulation of the optimal co-scheduling algorithm and show that such an algorithm would minimize the expected processor share of RT tasks at any instant. We propose the Stochastic Processor Sharing (SPS) algorithm, which uses the empirical probability distribution of the execution times of RT tasks to schedule them such that their maximum expected processor share at any instant is minimized. We show theoretically and empirically that SPS provides significant performance benefits over current co-scheduling algorithms in terms of reducing the response times of TS jobs.
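    The abstract's central statistical idea, using the empirical distribution of past execution times to estimate the expected processor share of RT tasks, can be sketched as follows; the sample values, function names, and share formula are illustrative assumptions, not the paper's algorithm.

```python
import bisect

# Sketch of the statistical core of an SPS-style scheduler: given the
# empirical distribution of past execution times, estimate the expected
# remaining work of an RT job that has already run for `elapsed` time,
# and derive the processor share needed to finish by its deadline.

samples = sorted([2.0, 2.5, 3.0, 3.0, 3.5, 4.0, 5.0])  # observed times (ms)

def expected_remaining(elapsed):
    """E[C - elapsed | C > elapsed] under the empirical distribution."""
    tail = samples[bisect.bisect_right(samples, elapsed):]
    if not tail:
        return 0.0
    return sum(c - elapsed for c in tail) / len(tail)

def required_share(elapsed, time_to_deadline):
    """Processor share covering the expected remaining demand."""
    if time_to_deadline <= 0:
        return 1.0
    return min(1.0, expected_remaining(elapsed) / time_to_deadline)

# A job that has already run 2.5 ms, with 4 ms left to its deadline,
# needs on average only a 30% share of the processor:
print(required_share(2.5, 4.0))   # 0.3
```

    Shares computed this way adapt to how long a job has actually been running, which matches the abstract's goal of minimizing the maximum expected processor share of RT tasks at any instant.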