1,778 research outputs found

    Teaching operating system concepts using multimedia and internet

    Get PDF
    The prime objective of the thesis is to research and demonstrate the benefits and advantages of using Internet and multimedia tools for an interactive educational learning experience. The Internet is developing at breathtaking speed into a mainstream communication medium, with the personal computer as its tool; the maturing and expanding information technology field is a prime reason behind this phenomenon. In what is described as the information age, students of information technology need to master new and complex technological concepts and ideas at a faster rate than ever before. The traditional approach of using textbooks is not feasible because of their static, linear and often colorless nature, so there is a tremendous need to develop an interactive, fun, and yet detailed and challenging educational experience. The next several chapters present a solution to this challenge using Operating System concepts, specifically Memory Management concepts, as a case study. The concepts include topics such as logical vs. physical address space, etc
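    The memory-management material named above lends itself to small worked examples. The following is a minimal, hypothetical C++ sketch of logical-to-physical address translation with a single-level page table; the page size, page table contents, and addresses are invented for illustration and are not drawn from the thesis.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Illustrative single-level paging: a logical address is split into a
// page number and an offset; the page table maps page numbers to frames.
constexpr uint32_t kPageSize = 4096;  // 4 KiB pages (assumed for the example)

uint32_t translate(uint32_t logical, const std::vector<uint32_t>& pageTable) {
    uint32_t page   = logical / kPageSize;   // which logical page
    uint32_t offset = logical % kPageSize;   // position within that page
    uint32_t frame  = pageTable.at(page);    // physical frame holding the page
    return frame * kPageSize + offset;       // physical address
}

int main() {
    // Hypothetical page table: logical page i resides in these physical frames.
    std::vector<uint32_t> pageTable = {5, 2, 7, 0};
    uint32_t logical = 1 * kPageSize + 123;  // page 1, offset 123
    std::cout << "logical " << logical
              << " -> physical " << translate(logical, pageTable) << "\n";
    // Prints: logical 4219 -> physical 8315 (frame 2 * 4096 + 123)
}
```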

    Animating the evolution of software

    Get PDF
    The use and development of open source software has increased significantly in the last decade. The high frequency of changes and releases across a distributed environment requires good project management tools in order to control the process adequately. However, even with these tools in place, the nature of the development and the fact that developers will often work on many other projects simultaneously mean that the developers are unlikely to have a clear picture of the current state of the project at any time. Furthermore, the poor documentation associated with many projects has a detrimental effect on encouraging new developers to contribute to the software. A typical version control repository contains a mine of information that is not always obvious and not easy to comprehend in its raw form. However, presenting this historical data in a suitable format by using software visualisation techniques allows the evolution of the software over a number of releases to be shown. This allows the changes that have been made to the software to be identified clearly, ensuring that the effect of those changes is also emphasised. This enables both managers and developers to gain a more detailed view of the current state of the project. The visualisation of evolving software introduces a number of new issues. This thesis investigates some of these issues in detail, and recommends a number of solutions in order to alleviate the problems that may otherwise arise. The solutions are then demonstrated in the definition of two new visualisations. These use historical data contained within version control repositories to show the evolution of the software at a number of levels of granularity. Additionally, animation is used as an integral part of both visualisations - not only to show the evolution by representing the progression of time, but also to highlight the changes that have occurred. Previously, the use of animation within software visualisation has been primarily restricted to small-scale, hand-generated visualisations. However, this thesis shows the viability of using animation within large-scale, automated software visualisations. In addition, evaluation of the visualisations has shown that they are suitable for showing the changes that have occurred in the software over a period of time, and subsequently how the software has evolved. These visualisations are therefore suitable for use by developers and managers involved with open source software. In addition, they also provide a basis for future research in evolutionary visualisations, software evolution and open source development
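    As a rough illustration of the raw historical data such visualisations are built on, the hypothetical sketch below counts how often each file has changed by reading the output of git log --name-only --pretty=format: from standard input. It is only an assumed example of mining a version control repository, not the tooling developed in the thesis.

```cpp
// Illustrative only: counts changes per file from repository history.
// Usage (assumed): git log --name-only --pretty=format: | ./change_counts
#include <iostream>
#include <map>
#include <string>

int main() {
    std::map<std::string, int> changesPerFile;
    std::string line;
    while (std::getline(std::cin, line)) {
        if (!line.empty())                 // non-empty lines are file paths
            ++changesPerFile[line];
    }
    for (const auto& [file, count] : changesPerFile)
        std::cout << count << '\t' << file << '\n';  // change frequency per file
    return 0;
}
```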

    The ontomedia project: ODR, relational justice, multimedia

    Get PDF
    More than ever, the Web is a space of social interaction. Recent trends reveal that Internet users spend more time interacting within online communities than in checking and replying to e-mail. Online communities and institutions create new spaces for interaction, but also open new avenues for the emergence of grievances, claims, and disputes. Consequently, online dispute resolution (ODR) procedures are core to these new online worlds. But can ODR mechanisms provide sufficient levels of reputation, trust, and enforceability so as to become mainstream? This contribution introduces the new approaches to ODR with an emphasis on the Ontomedia Project, which is currently developing a web-based platform to facilitate online mediation in different domains

    Heritage in the Limelight, a Collection in Progress: Uncovering, Connecting, Researching and Animating Australia's Magic Lantern Past

    Get PDF
    Once they are formed, the digital collections of cultural and collecting institutions do not exist in splendid isolation. As well as being aggregated data sets, digital heritage collections are also links to tangible objects and specific historical experiences. Digital collections may allow users to find the actual analogue objects from which they were derived, they may allow users to understand a particular historical experience (or a simulation of it), they may connect them to a particular place, or they may lead them to other digital collections. Digital heritage collections need to develop generous interfaces in order to maximise their unity across these different demands and to appeal to a variety of users. This article takes as its case study the digital database and interface made by the Australian-based research team, ‘Heritage in the Limelight: The Magic Lantern in Australia and the World’. It examines how the culture, ephemera and documentation around the magic lantern’s use in Australia across the nineteenth and twentieth centuries call for digital presentation in a dynamic, operational archive. The following piece surveys scholarly debates around digital collections that have informed the construction of the Heritage in the Limelight database and prototype Collection Explorer, as well as placing the creation of this Australian initiative in the context of work being done on lantern slide digital resources globally

    Creating Interactive Classrooms with Augmented Reality, a Review

    Get PDF
    Augmented reality (AR) is an emerging technology that is widely applauded as a significant innovation in education and that helps teachers improve their classrooms. Thanks to one-to-one program support and the development of applications (apps), it is considered an affordable technology that gives teachers access to new ways of both supporting and teaching. With AR, teachers can have the support provided by multimedia while using the environment inside the classroom. There are multiple free cross-platform apps available for teachers to use. This article clearly explains what AR is and how it can be used to support students in schools, and evaluates its educational utility

    Dynamic Volume Rendering of Functional Medical Data on Dissimilar Hardware Platforms

    Get PDF
    In the last 30 years, medical imaging has become one of the most used diagnostic tools in the medical profession. Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) technologies have become widely adopted because of their ability to capture the human body in a non-invasive manner. A volumetric dataset is a series of orthogonal 2D slices captured at a regular interval, typically along the axis of the body from the head to the feet. Volume rendering is a computer graphics technique that allows volumetric data to be visualized and manipulated as a single 3D object. Iso-surface rendering, image splatting, shear warp, texture slicing, and raycasting are all volume rendering methods, each with associated advantages and disadvantages; raycasting is widely regarded as the highest quality renderer of these methods.

    Originally, CT and MRI hardware was limited to providing a single 3D scan of the human body. The technology has since improved to allow sets of scans capable of capturing anatomical movement, such as a beating heart. The capture of anatomical data over time is referred to as functional imaging. Functional MRI (fMRI) is used to capture changes in the human body over time; while fMRI can capture any anatomical data over time, one of its more common uses is to capture brain activity. The fMRI scanning process is typically broken up into a time-consuming high-resolution anatomical scan and a series of quick low-resolution scans capturing activity. The low-resolution activity data is mapped onto the high-resolution anatomical data to show changes over time.

    Academic research has advanced volume rendering, and fMRI volume rendering specifically. Unfortunately, academic research is typically a one-off solution to a single medical case or dataset, so any advances are problem-specific rather than a general capability. Additionally, academic volume renderers are often designed to work on a specific device and operating system under controlled conditions. This prevents volume rendering from being used across the ever-expanding range of computing devices, such as desktops, laptops, immersive virtual reality systems, and mobile computers like phones or tablets.

    This research investigates the feasibility of creating a generic software capability to perform real-time 4D volume rendering, via raycasting, on desktop, mobile, and immersive virtual reality platforms. Implementing a GPU-based 4D volume raycasting method for mobile devices harnesses the power of the increasing number of mobile computational devices used by medical professionals. Developing support for immersive virtual reality can enhance medical professionals’ interpretation of 3D physiology with the additional depth information provided by stereoscopic 3D. The results of this research will help expand the use of 4D volume rendering beyond the traditional desktop computer in the medical field.

    Developing the same 4D volume rendering capabilities across dissimilar platforms presents many challenges. Each platform relies on its own coding languages, libraries, and hardware support, and there are tradeoffs between using languages and libraries native to each platform and using a generic cross-platform system, such as a game engine. Native libraries are generally more efficient at application run time, but they require a different coding implementation for each platform. In this research the decision was made to use platform-native languages and libraries whenever practical, in an attempt to achieve the best possible frame rates. 4D volume raycasting also presents unique challenges independent of the platform, specifically fMRI data loading, volume animation, and multiple-volume rendering. Additionally, real-time raycasting had never been successfully performed on a mobile device; previous research relied on less computationally expensive methods, such as orthogonal texture slicing, to achieve real-time frame rates. These challenges are addressed as the contributions of this research.

    The first contribution was exploring the feasibility of generic functional data input across desktop, mobile, and immersive virtual reality. To visualize 4D fMRI data it was necessary to build in the capability to read Neuroimaging Informatics Technology Initiative (NIfTI) files. The NIfTI format was designed to overcome limitations of 3D file formats like DICOM and to store functional imagery with a single high-resolution anatomical scan and a set of low-resolution anatomical scans. Allowing input of the NIfTI binary data required creating custom C++ routines, as no freely available object-oriented APIs existed. The NIfTI input code was built using C++ and the C++ Standard Library to be both lightweight and cross-platform.

    Multi-volume rendering is another challenge of fMRI data visualization and a contribution of this work. fMRI data is typically broken into a single high-resolution anatomical volume and a series of low-resolution volumes that capture anatomical changes. Visualizing two volumes at the same time is known as multi-volume visualization. Therefore, the ability to correctly align and scale the volumes relative to each other was necessary, as was a compositing method to combine data from both volumes into a single cohesive representation.

    Three prototype applications were built to test the feasibility of 4D volume raycasting: one each for desktop, mobile, and virtual reality. Although the backend implementations had to differ between the three platforms, the raycasting functionality and features are identical, so the same fMRI dataset produces the same 3D visualization regardless of platform. Each platform uses the same NIfTI data loader and provides support for dataset coloring and windowing (tissue density manipulation). The fMRI data can be viewed changing over time either by animating through the time steps, like a movie, or by using an interface slider to “scrub” through the different time steps of the data.

    The prototype applications’ data load times and frame rates were tested to determine whether they achieved the real-time interaction goal, defined as 10 frames per second (fps) or better based on the work of Miller [1]. The desktop version was evaluated on a 2013 MacBook Pro running OS X 10.12 with a 2.6 GHz Intel Core i7 processor, 16 GB of RAM, and an NVIDIA GeForce GT 750M graphics card. The immersive application was tested in the C6 CAVEℱ, a 96-node graphics computer cluster composed of NVIDIA Quadro 6000 graphics cards running Red Hat Enterprise Linux. The mobile application was evaluated on a 2016 9.7” iPad Pro running iOS 9.3.4; the iPad had a 64-bit Apple A9X dual-core processor with 2 GB of built-in memory. Two fMRI brain activity datasets with different voxel resolutions were used as test datasets.

    Datasets were tested using the 3D structural data, the 4D functional data, and a combination of the two. Frame rates for the desktop implementation were consistently above 10 fps, indicating that real-time 4D volume raycasting is possible on desktop hardware. The mobile and virtual reality platforms were able to perform real-time 3D volume raycasting consistently, a marked improvement over previous 3D mobile volume raycasting, which achieved under one frame per second [2]. Both the VR and mobile platforms were able to raycast the 4D-only data at real-time frame rates, but did not consistently meet 10 fps when rendering both the 3D structural and 4D functional data simultaneously. However, 7 frames per second was the lowest frame rate recorded, indicating that hardware advances will allow consistent real-time raycasting of 4D fMRI data in the near future
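    The custom C++ NIfTI reader described in the abstract above can be illustrated with a hedged sketch. The code below reads the fixed 348-byte NIfTI-1 header of an uncompressed single-file .nii and extracts the image dimensions; it assumes the file uses the machine's native byte order and omits the byte-swapping, datatype handling, and validation a real loader needs. It is an illustration of the format only, not the author's implementation.

```cpp
// Illustrative NIfTI-1 header read (not the thesis code). Assumes an
// uncompressed single-file .nii written in the machine's native byte order.
#include <cstdint>
#include <cstring>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

struct NiftiDims {
    int16_t dim[8];      // dim[0] = number of dimensions, dim[1..4] = x, y, z, time
    int16_t datatype;    // NIfTI datatype code (e.g., 4 = signed short)
    int16_t bitpix;      // bits per voxel
    float   vox_offset;  // byte offset where the voxel data starts
};

bool readNiftiHeader(const std::string& path, NiftiDims& out) {
    std::ifstream in(path, std::ios::binary);
    std::vector<char> hdr(348);                       // fixed NIfTI-1 header size
    if (!in.read(hdr.data(), hdr.size())) return false;

    int32_t sizeof_hdr = 0;
    std::memcpy(&sizeof_hdr, hdr.data() + 0, 4);
    if (sizeof_hdr != 348) return false;              // wrong size or byte-swapped file
    if (std::memcmp(hdr.data() + 344, "n+1", 3) != 0) return false;  // single-file magic

    std::memcpy(out.dim,         hdr.data() + 40,  8 * sizeof(int16_t));
    std::memcpy(&out.datatype,   hdr.data() + 70,  sizeof(int16_t));
    std::memcpy(&out.bitpix,     hdr.data() + 72,  sizeof(int16_t));
    std::memcpy(&out.vox_offset, hdr.data() + 108, sizeof(float));
    return true;
}

int main(int argc, char** argv) {
    NiftiDims d{};
    if (argc < 2 || !readNiftiHeader(argv[1], d)) return 1;
    std::cout << "volume: " << d.dim[1] << " x " << d.dim[2] << " x " << d.dim[3]
              << ", time points: " << d.dim[4]
              << ", bits/voxel: " << d.bitpix << "\n";
}
```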
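    The multi-volume compositing step, combining a sample from the high-resolution anatomical volume with a sample from the low-resolution functional volume at each point along a ray, can be sketched in a similar spirit. The blend weight and opacity transfer below are assumptions made purely for illustration; they are not the compositing method developed in the work.

```cpp
#include <algorithm>

// RGBA colour used for front-to-back accumulation along a ray.
struct RGBA { float r, g, b, a; };

// Hypothetical per-sample composite: the anatomical sample (grayscale after
// windowing) and the functional sample (mapped here to a red ramp) are blended
// with a fixed weight, with an assumed opacity transfer.
RGBA blendSample(float anatomical, float functional, float functionalWeight) {
    float f = std::clamp(functional, 0.0f, 1.0f);
    RGBA c;
    c.r = (1.0f - functionalWeight) * anatomical + functionalWeight * f;  // activation -> red
    c.g = (1.0f - functionalWeight) * anatomical;
    c.b = (1.0f - functionalWeight) * anatomical;
    c.a = std::max(anatomical * 0.05f, f * 0.5f);                         // assumed opacity
    return c;
}

// One front-to-back accumulation step: dst is the colour/opacity gathered so
// far along the ray, src is the current sample's contribution.
void accumulate(RGBA& dst, const RGBA& src) {
    float t = (1.0f - dst.a) * src.a;
    dst.r += t * src.r;
    dst.g += t * src.g;
    dst.b += t * src.b;
    dst.a += t;
}

int main() {
    RGBA ray{0, 0, 0, 0};
    // March a short, made-up ray: anatomical density varies, activation spikes once.
    float anat[] = {0.1f, 0.4f, 0.6f, 0.3f};
    float func[] = {0.0f, 0.0f, 0.9f, 0.0f};
    for (int i = 0; i < 4 && ray.a < 0.95f; ++i)      // early ray termination
        accumulate(ray, blendSample(anat[i], func[i], 0.5f));
    return 0;
}
```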

    WristSketcher: Creating Dynamic Sketches in AR with a Sensing Wristband

    Full text link
    Because native AR glasses offer only a limited interaction area (e.g., touch bars), creating sketches with them is challenging. Recent works have attempted to use mobile devices (e.g., tablets) or mid-air bare-hand gestures to expand the interaction space and to serve as 2D/3D sketching input interfaces for AR glasses. Of the two, mobile devices allow for accurate sketching but are often heavy to carry, while sketching with bare hands is zero-burden but can be inaccurate due to arm instability. In addition, mid-air bare-hand sketching can easily lead to social misunderstandings, and its prolonged use can cause arm fatigue. As a new attempt, in this work we present WristSketcher, a new AR system based on a flexible sensing wristband for creating 2D dynamic sketches, featuring an almost zero-burden authoring model for accurate and comfortable sketch creation in real-world scenarios. Specifically, we have streamlined the interaction space from mid-air to the surface of a lightweight sensing wristband, and implemented AR sketching and associated interaction commands by developing a gesture recognition method based on the pressure points sensed on the wristband. The set of interactive gestures used by WristSketcher was determined by a heuristic study of user preferences. Moreover, we endow WristSketcher with the ability to create animations, allowing it to produce dynamic and expressive sketches. Experimental results demonstrate that WristSketcher i) faithfully recognizes users' gesture interactions with a high accuracy of 96.0%; ii) achieves higher sketching accuracy than freehand sketching; iii) achieves high user satisfaction in ease of use, usability and functionality; and iv) shows innovation potential in art creation, memory aids, and entertainment applications
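    The abstract does not detail the recognition method beyond its use of pressure points sensed on the wristband, so the sketch below is only a generic, assumed illustration of the idea: each gesture is stored as a template vector of pressure readings and an incoming frame is matched to the nearest template. The sensor count, templates, and distance threshold are all invented for the example and do not come from the paper.

```cpp
#include <cmath>
#include <iostream>
#include <limits>
#include <string>
#include <vector>

// A gesture template: a name plus an expected pattern of pressure readings
// across the wristband's sensing points (values invented for illustration).
struct GestureTemplate {
    std::string name;
    std::vector<float> pattern;
};

// Classify a frame of pressure readings by nearest Euclidean distance to a
// template; returns "none" if nothing lies within the given threshold.
std::string classify(const std::vector<float>& frame,
                     const std::vector<GestureTemplate>& templates,
                     float threshold) {
    std::string best = "none";
    float bestDist = std::numeric_limits<float>::max();
    for (const auto& t : templates) {
        float d = 0.0f;
        for (size_t i = 0; i < frame.size(); ++i) {
            float diff = frame[i] - t.pattern[i];
            d += diff * diff;
        }
        d = std::sqrt(d);
        if (d < bestDist) { bestDist = d; best = t.name; }
    }
    return bestDist <= threshold ? best : "none";
}

int main() {
    // Four hypothetical sensing points; two made-up gesture templates.
    std::vector<GestureTemplate> templates = {
        {"tap",   {0.9f, 0.1f, 0.1f, 0.1f}},
        {"swipe", {0.2f, 0.6f, 0.6f, 0.2f}},
    };
    std::vector<float> frame = {0.8f, 0.2f, 0.1f, 0.0f};    // simulated reading
    std::cout << classify(frame, templates, 0.5f) << "\n";  // prints "tap"
}
```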

    An analysis of the design principles as applied to static and animated Web sites with an application of the design principles to an experimental static and animated Web site

    Get PDF
    The World Wide Web is the fastest-growing medium worldwide. Web design has become an increasingly sophisticated profession, and its standards and requirements have become more stringent. Static Web design is built upon a system of communication principles. By applying these principles, a non-animated Web site can convey information very effectively. With the development of Internet technology, users and clients have put forward an increasing demand for motion on the Web. From the tags of the early days to today's fully animated Web sites, Web animation has progressed to the point that it is impossible for any book to cover the topic thoroughly. Animated Web design is a new term; many people do not know the difference between Web animation, flashing banner advertising, and motion design. Animation extends Web design language and methods. Animation makes information communication more effective and direct because of its visual impact and graphic nature. The purpose of this thesis is to examine the effectiveness of animation on the Web by studying the principles of both non-animated and animated Web design. In fact, it is very important to apply non-animated Web design principles to Web animation. The research also found that knowledge of traditional animation principles is a must for today's Web designers. This research can be used as a guide for applying the design principles when creating Web animations. As technology and the industry develop, further research in the area of Web animation will be required. The design principles, however, will remain fundamental to the field of Web site design