
    The use of volumetric projections in Digital Human Modelling software for the identification of large goods vehicle blind spots

    The aim of the study is to understand the nature of blind spots in the vision of drivers of Large Goods Vehicles caused by vehicle design variables such as driver eye height and mirror design. The study was informed by cluster analysis of UK national accident data to establish whether vehicle blind spots contribute to accidents. To establish the cause and nature of blind spots, six top-selling trucks in the UK, covering a range of sizes, were digitized and imported into the SAMMIE Digital Human Modelling (DHM) system. A novel CAD-based vision projection technique, validated in a laboratory study, allowed multiple mirror and window aperture projections to be created, resulting in the identification and quantification of a key blind spot. The identified blind spot was demonstrated to have the potential to be associated with the scenarios identified in the accident data. The project led to the revision of UNECE Regulation 46, which defines mirror coverage in the European Union, with new vehicle registrations in Europe required to meet the amended standard after June 2015.

    Technological Advancement for Vehicles Operable by the Visually Impaired

    This senior project aims to provide blind persons with the ability to effectively experience driving. This report includes the project background, literature review, designs, methodologies, results, and conclusions, with project management, human factors engineering, and electronic manufacturing focuses. Other universities and professionals have accepted the Blind Driver Challenge presented by the National Federation of the Blind (NFB) or studied systems to improve vehicle feedback. The Virginia Tech vehicle, named Odin, includes tactile and audio interfaces to relay information to a blind driver about vehicle heading and speed. The QFD results reveal that the amount of available information from the feedback systems ranks as the most important aspect of this project's designs. The QFD also shows the importance of both speed and acceleration. The final feedback designs of the vibrating vest, steering wheel, and audio provide commands, statuses, and speed updates. The programs packaged with the SICK LIDAR sensor, as well as LabVIEW, will serve to accomplish the necessary programming. This project contains two expensive items that push its total cost fairly high: the dune buggy and the laser scanner. Considering the over 1,000 feet of electrical wire, electrical safety is a very large safety concern. Innovative sensor and tactile feedback technology provide the backbone for this advancement for the visually impaired.

    A Vision Based Top-View Transformation Model for a Vehicle Parking Assistant

    This paper proposes the Top-View Transformation Model for image coordinate transformation, which transforms a perspective projection image into its corresponding bird's-eye view. A fitting-parameter search algorithm estimates the parameters used to transform the coordinates from the source image. With this approach, it is not necessary to provide any interior or exterior orientation parameters of the camera. The designed car parking assistant system can be installed at the rear end of the car, providing the driver with a clearer image of the area behind the car. The processing time can be reduced by storing the transformation matrix estimated from the first image frame and reusing it for a sequence of video images. The transformation matrix can be stored as the Matrix Mapping Table and loaded onto the embedded platform to perform the transformation. Experimental results show that the proposed approaches can provide a clearer and more accurate bird's-eye view to the vehicle driver.
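    The core of such a perspective-to-top-view transform is a 3x3 homography estimated from a few point correspondences and then cached for every subsequent frame. The sketch below is a minimal illustration of that idea, not the paper's algorithm: the direct linear transform replaces the paper's fitting-parameter search, and the corner coordinates, `fit_homography`, and `warp_points` names are all illustrative assumptions.

    ```python
    import numpy as np

    def fit_homography(src, dst):
        """Estimate the 3x3 perspective (homography) matrix mapping src -> dst
        from four point correspondences via the direct linear transform,
        fixing the bottom-right entry to 1.  src/dst are (4, 2) arrays."""
        A, b = [], []
        for (x, y), (u, v) in zip(src, dst):
            A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
            A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
        h = np.linalg.solve(np.array(A, float), np.array(b, float))
        return np.append(h, 1.0).reshape(3, 3)

    def warp_points(H, pts):
        """Apply homography H to (N, 2) points, with homogeneous divide."""
        pts = np.hstack([pts, np.ones((len(pts), 1))])
        out = (H @ pts.T).T
        return out[:, :2] / out[:, 2:3]

    # Four corners of the ground region as seen in the perspective image,
    # and the rectangle they should occupy in the bird's-eye image
    # (illustrative pixel values, not from the paper).
    src = np.array([[100, 300], [540, 300], [630, 470], [10, 470]], float)
    dst = np.array([[0, 0], [400, 0], [400, 300], [0, 300]], float)

    # Estimate once on the first frame, then reuse H for every later frame,
    # analogous to the paper's precomputed Matrix Mapping Table.
    H = fit_homography(src, dst)
    ```

    In a full system, `H` would be expanded into a per-pixel lookup table so each video frame is remapped with no per-frame estimation cost.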

    Spartan Daily, September 28, 2006

    Volume 127, Issue 19
    https://scholarworks.sjsu.edu/spartandaily/10277/thumbnail.jp

    Spartan Daily, April 25, 1977

    Volume 68, Issue 51
    https://scholarworks.sjsu.edu/spartandaily/6205/thumbnail.jp

    English. A study guide in English for first- and second-year students of the specialty «Автомобілі і автомобільне господарство» (Automobiles and the Automotive Industry)

    Part I
    Lesson 1. Essential parts of an automobile … 5
    Unit 2. Types of Waves … 8
    Unit 3. Speed of Waves … 10
    Unit 4. Interactions of Waves … 13
    Unit 5. Electromagnetic Waves … 16
    Unit 6. Type of Waves … 19
    Part II … 22
    Unit 1. Infrared Rays … 22
    Unit 2. Visible Light … 25
    Unit 3. Wave or Particle? … 29
    Unit 4. Reflection of Light … 31
    Unit 5. Reflection and Mirrors … 34
    Unit 6. Refraction of Light … 37
    Unit 7. Optical Instruments … 40
    Unit 8. Lasers … 43
    Unit 9. Fiber Optics … 47
    Part III … 52
    Unit 1. A Halogen Lamp … 52
    Unit 2. LED Lamp … 54
    Unit 3. Electroluminescent Wire … 57
    Unit 4. Black Light … 59
    Unit 5. Compact Fluorescent Lamp (CFL) … 62
    Unit 6. Plasma Lamps … 65
    Unit 7. Architectural Lighting Design … 68
    Part IV … 70
    Additional reading … 70

    Layout Sequence Prediction From Noisy Mobile Modality

    Trajectory prediction plays a vital role in understanding pedestrian movement for applications such as autonomous driving and robotics. Current trajectory prediction models depend on long, complete, and accurately observed sequences from visual modalities. Nevertheless, real-world situations often involve obstructed cameras, missed objects, or objects out of sight due to environmental factors, leading to incomplete or noisy trajectories. To overcome these limitations, we propose LTrajDiff, a novel approach that treats objects obstructed or out of sight as equally important as those with fully visible trajectories. LTrajDiff utilizes sensor data from mobile phones to surmount out-of-sight constraints, albeit introducing new challenges such as modality fusion, noisy data, and the absence of spatial layout and object size information. We employ a denoising diffusion model to predict precise layout sequences from noisy mobile data using a coarse-to-fine diffusion strategy, incorporating the RMS, Siamese Masked Encoding Module, and MFM. Our model predicts layout sequences by implicitly inferring object size and projection status from a single reference timestamp or significantly obstructed sequences. By achieving SOTA results in randomly obstructed experiments and extremely short input experiments, our model illustrates the effectiveness of leveraging noisy mobile data. In summary, our approach offers a promising solution to the challenges faced by layout sequence and trajectory prediction models in real-world settings, paving the way for utilizing sensor data from mobile phones to accurately predict pedestrian bounding box trajectories.
To the best of our knowledge, this is the first work to address severely obstructed and extremely short layout sequences by combining vision with a noisy mobile modality, making it the pioneering work in the field of layout sequence trajectory prediction. Comment: In Proceedings of the 31st ACM International Conference on Multimedia 2023 (MM '23)