5,709 research outputs found

    Intelligent composite layup by the application of low cost tracking and projection technologies

    Hand layup is still the dominant forming process for creating the widest range of complex-geometry and mixed-material composite parts. However, the process remains poorly understood and poorly informed, limiting productivity. This paper seeks to address this issue by proposing a novel, low-cost system that guides a laminator in real time from a predetermined instruction set, thus improving the standardisation of produced components. Current methodologies are critiqued and future trends predicted before the required inputs and outputs are introduced and the implemented system is developed. As a demonstrator, a U-shaped component typical of the complex geometry found in many difficult-to-manufacture composite parts was chosen, and its drapeability assessed with a kinematic drape simulation tool. An experienced laminator's knowledge base was then used to divide the tool into a finite number of features, with layup conducted by projecting and sequentially highlighting target features while tracking the laminator's hand movements across the ply. The system has been implemented with affordable hardware and demonstrates tangible benefits over currently employed laser-based systems. It has shown remarkable success to date, with rapid Technology Readiness Level advancement, and is a major stepping stone towards augmenting manual labour, with further benefits including more appropriate automation.
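
The guidance loop this abstract describes (divide the tool into features, highlight one feature at a time, advance when the tracked hand has covered it) can be sketched as a small state machine. All names and the rectangular feature regions below are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of the guided-layup loop: the tool surface is divided
# into named features, each with a 2D target region in projector coordinates;
# the system highlights one feature at a time and advances when the tracked
# hand position enters that region.
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    region: tuple  # axis-aligned target (x_min, y_min, x_max, y_max), projector px

def contains(region, point):
    x0, y0, x1, y1 = region
    x, y = point
    return x0 <= x <= x1 and y0 <= y <= y1

def run_layup(features, hand_positions):
    """Advance through features as the hand enters each region in turn.
    Returns the sequence of (feature_name, position) completion events."""
    events = []
    idx = 0
    for pos in hand_positions:          # one sample per tracker frame
        if idx >= len(features):
            break
        feat = features[idx]
        # in a real system the projector would highlight `feat` here
        if contains(feat.region, pos):
            events.append((feat.name, pos))
            idx += 1                    # move the highlight to the next feature
    return events

features = [Feature("flange_left", (0, 0, 10, 40)),
            Feature("web", (10, 0, 30, 40)),
            Feature("flange_right", (30, 0, 40, 40))]
track = [(5, 20), (15, 20), (25, 20), (35, 20)]
print(run_layup(features, track))
```

A real system would replace the synthetic `track` list with live hand positions from the low-cost tracker and drive the projector from the currently highlighted feature.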

    Multi-Projector Content Preservation with Linear Filters

    Using aligned overlapping image projectors provides several advantages when compared to a single projector: increased brightness, additional redundancy, and increased pixel density within a region of the screen. Aligning content between projectors is achieved by applying space transformation operations to the desired output. The transformation operations often degrade the quality of the original image due to sampling and quantization. The transformation applied for a given projector is typically done in isolation of all other content-projector transformations. However, it is possible to warp the images with prior knowledge of each other such that they utilize the increase in effective pixel density. This allows for an increase in the perceptual quality of the resulting stacked content. This paper presents a novel method of increasing the perceptual quality within multi-projector configurations. A machine learning approach is used to train a linear-filtering-based model that conditions the individual projected images on each other.
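
The core idea (solving each projector's image jointly with the others rather than in isolation) can be illustrated with a toy 1D linear model. This is a minimal sketch under assumed geometry (two coarse projectors offset by half a pixel, summing on the screen), not the paper's learned filter:

```python
import numpy as np

# Two 1D "projectors" with coarse pixels, offset by half a pixel, sum on the
# screen. A is the linear model mapping both projectors' pixel values to fine
# screen samples; solving the joint least-squares problem conditions each
# projector's image on the other, as in the stacked-content idea above.
fine = 8          # fine screen samples
coarse = 4        # pixels per projector (each covers 2 fine samples)

def upsample_matrix(shift):
    """Each coarse pixel spreads uniformly over 2 fine samples, shifted."""
    A = np.zeros((fine, coarse))
    for j in range(coarse):
        for k in range(2):
            A[(2 * j + k + shift) % fine, j] = 1.0
    return A

A = np.hstack([upsample_matrix(0), upsample_matrix(1)])  # joint model
target = np.array([0, 1, 2, 3, 4, 3, 2, 1], dtype=float)

x, *_ = np.linalg.lstsq(A, target, rcond=None)  # both images solved jointly
recon_joint = A @ x

# Naive baseline: each projector independently shows half the target,
# ignoring the other's contribution.
naive = np.concatenate([target[::2] / 2, target[1::2] / 2])
recon_naive = A @ naive

print(np.linalg.norm(target - recon_joint), np.linalg.norm(target - recon_naive))
```

By construction the joint solve can never do worse than the independent one; the paper's contribution is learning such a conditioning filter rather than solving it in closed form.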

    Conceptual design study for an advanced cab and visual system, volume 2

    The performance, design, construction and testing requirements are defined for developing an advanced cab and visual system. The rotorcraft system integration simulator is composed of the advanced cab and visual system and the rotorcraft system motion generator, and is part of an existing simulation facility. Users' applications for the simulator include rotorcraft design development, product improvement, threat assessment, and accident investigation.

    A compressive light field projection system

    For about a century, researchers and experimentalists have strived to bring glasses-free 3D experiences to the big screen. Much progress has been made, and light field projection systems are now commercially available. Unfortunately, available display systems usually employ dozens of devices, making such setups costly, energy inefficient, and bulky. We present a compressive approach to light field synthesis with projection devices. For this purpose, we propose a novel, passive screen design that is inspired by angle-expanding Keplerian telescopes. Combined with high-speed light field projection and nonnegative light field factorization, we demonstrate that compressive light field projection is possible with a single device. We build a prototype light field projector and angle-expanding screen from scratch, evaluate the system in simulation, present a variety of results, and demonstrate that the projector can alternatively achieve super-resolved and high dynamic range 2D image display when used with a conventional screen.
    MIT Media Lab Consortium; Natural Sciences and Engineering Research Council of Canada (NSERC Postdoctoral Fellowship); National Science Foundation (U.S.) (NSF Grant 0831281)
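
The nonnegative light field factorization mentioned above requires factors that stay nonnegative, since each factor is displayed as physical pixel states. A generic stand-in for that step is rank-constrained NMF with Lee-Seung multiplicative updates (a standard technique, not the authors' exact solver; the matrix sizes are illustrative):

```python
import numpy as np

# Approximate a nonnegative "light field" matrix L by B @ C with B, C >= 0,
# so that each factor could be shown as nonnegative pixel states.
rng = np.random.default_rng(0)
L = rng.random((6, 5)) @ rng.random((5, 6))   # nonnegative target, rank <= 5

rank = 4
B = rng.random((6, rank)) + 0.1
C = rng.random((rank, 6)) + 0.1

for _ in range(500):
    # multiplicative updates keep B and C nonnegative by construction
    C *= (B.T @ L) / (B.T @ B @ C + 1e-12)
    B *= (L @ C.T) / (B @ C @ C.T + 1e-12)

err = np.linalg.norm(L - B @ C) / np.linalg.norm(L)
print(f"relative error: {err:.4f}")
```

In the display setting the two factors correspond to time-multiplexed patterns on the high-speed projector and the screen's angular expansion, which the eye integrates into the target light field.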

    Micro Fourier Transform Profilometry (μFTP): 3D shape measurement at 10,000 frames per second

    Recent advances in imaging sensors and digital light projection technology have facilitated rapid progress in 3D optical sensing, enabling 3D surfaces of complex-shaped objects to be captured with improved resolution and accuracy. However, due to the large number of projection patterns required for phase recovery and disambiguation, the maximum frame rates of current 3D shape measurement techniques are still limited to the range of hundreds of frames per second (fps). Here, we demonstrate a new 3D dynamic imaging technique, Micro Fourier Transform Profilometry (μFTP), which can capture 3D surfaces of transient events at up to 10,000 fps based on our newly developed high-speed fringe projection system. Compared with existing techniques, μFTP has the prominent advantage of recovering an accurate, unambiguous, and dense 3D point cloud with only two projected patterns. Furthermore, the phase information is encoded within a single high-frequency fringe image, thereby allowing motion-artifact-free reconstruction of transient events with a temporal resolution of 50 microseconds. To show μFTP's broad utility, we use it to reconstruct 3D videos of four transient scenes: vibrating cantilevers, rotating fan blades, a bullet fired from a toy gun, and a balloon's explosion triggered by a flying dart, all of which were previously difficult or even impossible to capture with conventional approaches.
    Comment: This manuscript was originally submitted on 30th January 1
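
The single-pattern phase recovery at the heart of Fourier Transform Profilometry can be shown in 1D: the surface-induced phase modulates a high-frequency carrier fringe, and isolating the carrier lobe in the Fourier domain recovers it. A minimal sketch with synthetic data (the band limits and carrier frequency are chosen for this toy signal; no unwrapping is needed because the phase stays within ±π):

```python
import numpy as np

# 1D Fourier Transform Profilometry: recover phase phi(x) from a single
# fringe pattern by band-passing the positive carrier lobe and demodulating.
N = 512
x = np.arange(N)
f0 = 32 / N                               # carrier frequency (cycles/sample)
phi = 0.8 * np.sin(2 * np.pi * x / N)     # smooth "surface" phase, |phi| < pi

fringe = 1.0 + 0.5 * np.cos(2 * np.pi * f0 * x + phi)  # captured pattern

F = np.fft.fft(fringe)
mask = np.zeros(N)
mask[16:48] = 1.0                          # keep only the lobe around bin 32
analytic = np.fft.ifft(F * mask)           # ~ 0.25 * exp(i(2*pi*f0*x + phi))

# remove the carrier, leaving the surface phase
recovered = np.angle(analytic * np.exp(-2j * np.pi * f0 * x))
print(np.max(np.abs(recovered - phi)))
```

μFTP's contribution layers high-speed projection and a two-pattern disambiguation scheme on top of this basic demodulation.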

    Changing Light: a plethora of digital tools as slides gasp their last?

    The title 'Changing Light' reflects the enormous changeover from analogue slides to digital images: both a cultural shift and a physical shift in light, from the smoky beams of dual slide projectors piercing the dark of a classroom to the bright white classrooms of the digital age. The evidence for the 'death of slides' has been mounting for a number of years, as reported by visual resources curators in the US and the UK. In 2005 JISC funded AHDS Visual Arts to report on 'the effects of the digital image revolution on the UK arts education community'; the Association of Curators of Art and Design Images (ACADI), the Association of Art Historians (AAH), and the Art Libraries Society (ARLIS/UK & Ireland) contributed significantly to the Digital Picture initiative. However, some of the issues highlighted by the final report, such as the provision of copyright-cleared digital images for use in education, are yet to be addressed. This paper considers what arts education stands to lose from the 'death of slides' in the context of digital images and the plethora of digital presentation tools. As well as a change in light, there is a change from the physical, tangible slide technology to the virtual digital image and computing in the cloud.

    Method and apparatus for calibrating a display using an array of cameras

    The present invention overcomes many of the disadvantages of the prior art by providing a display that can be calibrated and re-calibrated with a minimal amount of manual intervention. To accomplish this, the present invention provides one or more cameras to capture an image that is projected on a display screen. In one embodiment, the one or more cameras are placed on the same side of the screen as the projectors. In another embodiment, an array of cameras is provided on either or both sides of the screen for capturing a number of adjacent and/or overlapping capture images of the screen. In either of these embodiments, the resulting capture images are processed to identify any non-desirable characteristics, including any visible artifacts such as seams, bands, rings, etc. Once the non-desirable characteristics are identified, an appropriate transformation function is determined. The transformation function is used to pre-warp the input video signal to the display such that the non-desirable characteristics are reduced or eliminated from the display. The transformation function preferably compensates for spatial non-uniformity, color non-uniformity, luminance non-uniformity, and/or other visible artifacts.
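
In the simplest planar case, the geometric part of such a "transformation function" is a homography between camera-observed positions and desired screen positions, estimated from point correspondences. A standard direct-linear-transform sketch (a generic technique, not the patent's specific method; the corner coordinates are made up):

```python
import numpy as np

# Fit a 3x3 homography H with dst ~ H @ src from point correspondences,
# then use it to pre-warp content so observed distortion is cancelled.
def fit_homography(src, dst):
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence contributes two linear constraints on H
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)       # null-space vector of A
    return H / H[2, 2]

def apply_h(H, pt):
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]            # back from homogeneous coordinates

# camera-observed projector corners (distorted) vs. desired positions
observed = [(0, 0), (100, 5), (98, 102), (-3, 99)]
desired = [(0, 0), (100, 0), (100, 100), (0, 100)]
H = fit_homography(observed, desired)
print(apply_h(H, observed[1]))
```

Photometric corrections (color and luminance non-uniformity) would be handled by additional per-pixel gain and offset maps measured from the same camera images.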

    Multi-touch Detection and Semantic Response on Non-parametric Rear-projection Surfaces

    The ability of human beings to physically touch our surroundings has had a profound impact on our daily lives. Young children learn to explore their world by touch; likewise, many simulation and training applications benefit from natural touch interactivity. As a result, modern interfaces supporting touch input are ubiquitous. Typically, such interfaces are implemented on integrated touch-display surfaces with simple geometry that can be mathematically parameterized, such as planar surfaces and spheres; for more complicated non-parametric surfaces, such parameterizations are not available. In this dissertation, we introduce a method for generalizable optical multi-touch detection and semantic response on uninstrumented non-parametric rear-projection surfaces using an infrared-light-based multi-camera multi-projector platform. In this paradigm, touch input allows users to manipulate complex virtual 3D content that is registered to and displayed on a physical 3D object. Detected touches trigger responses with specific semantic meaning in the context of the virtual content, such as animations or audio responses. The broad problem of touch detection and response can be decomposed into three major components: determining if a touch has occurred, determining where a detected touch has occurred, and determining how to respond to a detected touch. Our fundamental contribution is the design and implementation of a relational lookup table architecture that addresses these challenges through the encoding of coordinate relationships among the cameras, the projectors, the physical surface, and the virtual content. Detecting the presence of touch input primarily involves distinguishing between touches (actual contact events) and hovers (near-contact proximity events). We present and evaluate two algorithms for touch detection and localization utilizing the lookup table architecture. 
One of the algorithms, a bounded plane sweep, is additionally able to estimate hover-surface distances, which we explore for interactions above surfaces. The proposed method is designed to operate with low latency and to be generalizable. We demonstrate touch-based interactions on several physical parametric and non-parametric surfaces, and we evaluate both system accuracy and the accuracy of typical users in touching desired targets on these surfaces. In a formative human-subject study, we examine how touch interactions are used in the context of healthcare and present an exploratory application of this method in patient simulation. A second study highlights the advantages of touch input on content-matched physical surfaces achieved by the proposed approach, such as decreases in induced cognitive load, increases in system usability, and increases in user touch performance. In this experiment, novice users were nearly as accurate when touching targets on a 3D head-shaped surface as when touching targets on a flat surface, and their self-perception of their accuracy was higher.
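
The relational lookup table architecture described above can be caricatured as a chain of precomputed mappings: camera pixel to surface coordinate to semantic content region to response. A minimal sketch in which every table entry, region name, and response is an invented placeholder, not data from the dissertation:

```python
# Calibration-time tables: camera pixel -> surface point -> content region.
# A detected touch then resolves to a semantic response by lookups alone,
# with no runtime geometry. All entries below are illustrative.
pixel_to_surface = {(120, 80): ("head", 0.31, 0.62),
                    (121, 80): ("head", 0.32, 0.62),
                    (300, 200): ("head", 0.75, 0.10)}
surface_to_region = {("head", 0.31, 0.62): "left_eye",
                     ("head", 0.32, 0.62): "left_eye",
                     ("head", 0.75, 0.10): "chin"}
region_response = {"left_eye": "blink_animation", "chin": "audio_ouch"}

def respond_to_touch(cam_pixel):
    """Resolve a detected touch at a camera pixel to a semantic response."""
    surface_pt = pixel_to_surface.get(cam_pixel)
    if surface_pt is None:
        return None                       # touch outside calibrated surface
    region = surface_to_region[surface_pt]
    return region_response[region]

print(respond_to_touch((120, 80)))
```

The point of precomputing the chain is that no parameterization of the non-parametric surface is needed at runtime: all geometric reasoning happens once, during calibration.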

    Adaptive, spatially-varying aberration correction for real-time holographic projectors.

    A method of generating an aberration- and distortion-free wide-angle holographically projected image in real time is presented. The target projector is first calibrated using an automated adaptive-optical mechanism. The calibration parameters are then fed into the hologram generation program, which applies a novel piece-wise aberration correction algorithm. The method is found to offer hologram generation times up to three orders of magnitude faster than the standard method. A projection of an aberration- and distortion-free image with a field of view of 90x45 degrees is demonstrated. The implementation on a mid-range GPU achieves high resolution at a frame rate of up to 12 fps. The presented methods are automated and can be performed on any holographic projector.
    Engineering and Physical Sciences Research Council. This is the final version of the article. It first appeared from the Optical Society of America via https://doi.org/10.1364/OE.24.01574
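
The piece-wise idea can be illustrated simply: instead of applying a full per-pixel aberration phase at every frame, precompute a coarse per-tile approximation of the calibrated aberration map, so each hologram tile gets one cheap correction term at render time. This is a deliberately simplified sketch (constant per tile, synthetic aberration map), not the paper's algorithm:

```python
import numpy as np

# Per-tile constant approximation of a calibrated aberration phase map.
N, tile = 64, 16
yy, xx = np.mgrid[0:N, 0:N] / N
# synthetic smooth aberration phase (radians), standing in for calibration data
aberration = 2.0 * (xx - 0.5) ** 2 + 1.0 * (yy - 0.5) * (xx - 0.5)

correction = np.zeros_like(aberration)
for i in range(0, N, tile):
    for j in range(0, N, tile):
        block = aberration[i:i + tile, j:j + tile]
        correction[i:i + tile, j:j + tile] = block.mean()  # one term per tile

residual = np.abs(aberration - correction).max()
print(f"max residual phase after piecewise correction: {residual:.3f} rad")
```

Smaller tiles (or low-order polynomial terms per tile, as a spatially-varying correction suggests) trade a little precomputation for a tighter residual, while keeping the per-frame cost far below a full per-pixel correction.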

    Synchronised slide presentations

    This paper has been drawn from the dissertation Audiovisuals: evaluation and planning by Tony Lilleby (1985). Audiovisual presentations play a major role in the fields of education, advertising and tourism. The ability of this medium to communicate and entertain has led to its use in many of New Zealand's museums and national parks, with the techniques of production being taught in some universities and technical colleges. The development of high-tech photographic hardware has opened new frontiers of presentation and reliability, and slide productions are now a serious competitor to moving film and video. However, the different mediums share the necessary investment of time, careful and creative planning, and the coordination of the multiple talents a production team requires. This dissertation is intended to provide an understanding of one aspect of the art of audiovisual production - synchronised slide programmes. The strengths of this medium are evaluated, and the planning, design and production procedures are outlined. This will prove particularly useful to those who are contemplating the commissioning or construction of an audiovisual presentation which incorporates slide presentations.