
    Natural and Technological Hazards in Urban Areas

    Natural hazard events and technological accidents are distinct causes of environmental impact. Natural hazards are physical phenomena that have been active over geological time, whereas technological hazards result from human actions or human-made facilities. In recent times, combined natural and man-made hazards have also emerged. Overpopulation and urban development in areas prone to natural hazards increase the impact of natural disasters worldwide. Additionally, urban areas are frequently characterized by intense industrial activity and by rapid, poorly planned growth that threatens the environment and degrades the quality of life. Therefore, proper urban planning is crucial to minimize fatalities and to reduce the environmental and economic impacts that accompany both natural and technological hazardous events.

    Scalable Exploration of Complex Objects and Environments Beyond Plain Visual Replication

    Digital multimedia content and presentation means are rapidly growing in sophistication and are now capable of describing detailed representations of the physical world. 3D exploration experiences allow people to appreciate, understand, and interact with intrinsically virtual objects. Communicating information on objects requires the ability to explore them from different angles, as well as to mix highly photorealistic or illustrative presentations of the objects themselves with additional data that provides further insight, typically represented in the form of annotations. Effectively providing these capabilities requires solving important problems in visualization and user interaction. In this thesis, I studied these problems in the cultural heritage computing domain, focusing on the very common and important special case of mostly planar, but visually, geometrically, and semantically rich objects. These include generally roughly flat objects with a standard frontal viewing direction (e.g., paintings, inscriptions, bas-reliefs), as well as visualizations of fully 3D objects from a particular point of view (e.g., canonical views of buildings or statues). Selecting a precise application domain and a specific presentation mode allowed me to concentrate on the well-defined use case of exploring annotated relightable stratigraphic models (in particular, for local and remote museum presentation). My main results and contributions to the state of the art are a novel technique for interactively controlling visualization lenses while automatically maintaining good focus-and-context parameters, a novel approach for avoiding clutter in an annotated model and for guiding users towards interesting areas, and a method for structuring audio-visual object annotations into a graph and for using that graph to improve guidance and support storytelling and automated tours.
We demonstrated the effectiveness and potential of our techniques by performing interactive exploration sessions on screens of various sizes and types, ranging from desktop devices to large displays for a walk-up-and-use museum installation.
KEYWORDS: Computer Graphics, Human-Computer Interaction, Interactive Lenses, Focus-and-Context, Annotated Models, Cultural Heritage Computing
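The thesis's lens-control technique is its own contribution and is not reproduced here; purely as an illustration of the general focus-and-context idea (the function name, radii, and falloff below are invented for this sketch), a lens weight that blends smoothly from a focus region into the surrounding context could look like:

```python
def lens_weight(d, r_focus=1.0, r_context=2.0):
    """Blend factor for a focus-and-context lens: 1.0 inside the focus
    radius, 0.0 beyond the context radius, and a smoothstep falloff in
    the transition ring so the lens merges into its context without seams."""
    if d <= r_focus:
        return 1.0
    if d >= r_context:
        return 0.0
    t = (d - r_focus) / (r_context - r_focus)  # 0..1 across the ring
    return 1.0 - t * t * (3.0 - 2.0 * t)       # smoothstep falloff
```

Rendering code would then interpolate between the magnified (focus) and plain (context) presentation of each pixel by this weight.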

    Expanding the User Interactions and Design Process of Haptic Experiences in Virtual Reality

    Virtual reality can be a highly immersive experience due to its realistic visual presentation. This immersive state is useful for applications including education, training, and entertainment. To further enhance the immersion provided by virtual reality, devices capable of simulating touch and force have been researched, allowing not only a visual and audio experience but a haptic experience as well. Such research has investigated many approaches to generating haptics for virtual reality but often does not explore how to use them to create an immersive haptic experience. In this thesis, we present a discussion of four proposed areas of the virtual reality haptic experience design process using a demonstration methodology. To investigate the application of haptic devices, we designed a modular ungrounded haptic system, used it to create a general-purpose device capable of force-based feedback, and employed that device in three areas of exploration. The first area explored is the application of existing haptic theory for aircraft control to the field of virtual reality drone control. The second is the presence of the size-weight sensory illusion within virtual reality when using a simulated haptic force. The third is how authoring within a virtual reality medium can be used by a designer to create VR haptic experiences. From these explorations, we begin a higher-level discussion of the broader process of creating a virtual reality haptic experience. Using the results of each project as a representation of our proposed design steps, we discuss the broader concepts the steps contribute to the process and their importance, and also draw connections between them. By doing this, we present a more holistic approach to the large-scale design of virtual reality haptic experiences and the benefits we believe it provides.

    Insect neuroethology of reinforcement learning

    Historically, reinforcement learning is a branch of machine learning founded on observations of how animals learn. This involved a collaboration between biology and artificial intelligence that benefited both fields, creating smarter artificial agents and improving the understanding of how biological systems function. Reinforcement learning has evolved rapidly over the past few years, but it has substantially diverged from providing insights into how biological systems work, opening a gap between reinforcement learning and biology. In an attempt to close this gap, this thesis studied the insect neuroethology of reinforcement learning, that is, the neural circuits that underlie reinforcement-learning-related behaviours in insects. The goal was to extract a biologically plausible plasticity function from insect-neuronal data, use it to explain biological findings, and compare it to more standard reinforcement learning models. Consequently, a novel dopaminergic plasticity rule was developed to approximate the function of dopamine as the plasticity mechanism between neurons in the insect brain. This allowed a range of observed learning phenomena to happen in parallel, such as memory depression, potentiation, recovery, and saturation. In addition, using anatomical data on connections between neurons in the mushroom body neuropils of the insect brain, the neural incentive circuit of dopaminergic and output neurons was also explored. Together with the dopaminergic plasticity rule, this allowed for dynamic collaboration among parallel memory functions, such as acquisition, transfer, and forgetting. When tested on olfactory conditioning paradigms, the model reproduced the observed changes in the activity of the identified neurons in fruit flies. It also replicated the observed behaviour of the animals and allowed for flexible behavioural control.
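The dissertation's actual dopaminergic rule is derived from insect-neuronal data and is not reproduced here; as a toy sketch of the general three-factor idea only (the function, names, and constants below are invented, not the thesis's rule), dopamine-gated potentiation, depression, and saturation can be illustrated as:

```python
def dopamine_update(w, pre, d, eta=0.5, w_min=0.0, w_max=1.0):
    """Toy three-factor update: the weight change is gated by presynaptic
    activity `pre` and a dopamine signal `d`.  Positive dopamine
    potentiates towards w_max, negative dopamine depresses towards w_min;
    the shrinking (bound - w) factor produces saturation at either bound."""
    if d >= 0:
        dw = eta * d * pre * (w_max - w)   # potentiation, saturating at w_max
    else:
        dw = eta * d * pre * (w - w_min)   # depression, saturating at w_min
    return min(w_max, max(w_min, w + dw))

w = 0.5
for _ in range(5):                          # repeated reward: potentiation
    w = dopamine_update(w, pre=1.0, d=1.0)  # w climbs towards 1.0
w = dopamine_update(w, pre=1.0, d=-1.0)     # punishment: partial depression
```

Because the change is multiplicative in `pre` and `d`, synapses onto inactive neurons are untouched, which is what lets several memory functions update in parallel in rules of this family.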
Inspired by the visual navigation system of desert ants, the model was further challenged with a visual place recognition task. Although a relatively simple encoding of the olfactory information was sufficient to explain odour learning, a more sophisticated encoding of the visual input was required to increase the separability of the visual inputs and enable visual place recognition. Signal whitening and sparse combinatorial encoding were sufficient to boost the performance of the system on this task. The incentive circuit enabled the encoding of increasing familiarity along a known route, which dropped in proportion to the animal's distance from that route. Finally, the proposed model was challenged with delayed reinforcement tasks, suggesting that it might take the role of an adaptive critic in the context of reinforcement learning.
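The thesis's own preprocessing pipeline is specific to its visual inputs; as a generic, hedged sketch of the two named ingredients (ZCA whitening and a top-k sparse code, with all names and parameters below invented for illustration), one might write:

```python
import numpy as np

def whiten(X, eps=1e-5):
    """ZCA-whiten the rows of X: decorrelate the input dimensions and
    equalise their variance, which increases separability of the inputs."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return Xc @ W

def sparsify(X, k=2):
    """Sparse combinatorial code: keep only the k strongest responses
    per sample and zero the rest."""
    out = np.zeros_like(X)
    top = np.argsort(-np.abs(X), axis=1)[:, :k]
    rows = np.arange(len(X))[:, None]
    out[rows, top] = X[rows, top]
    return out

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
X = rng.normal(size=(500, 4)) @ A   # correlated raw inputs
Z = sparsify(whiten(X), k=2)        # decorrelated, then sparse
```

After whitening, the sample covariance is close to the identity, so no single correlated direction dominates the code that the downstream memory circuit learns from.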

    The mad manifesto

    The “mad manifesto” project is a multidisciplinary mediated investigation into the circumstances by which mad (mentally ill, neurodivergent) or disabled (disclosed, undisclosed) students faced far more precarious circumstances with inadequate support models while attending North American universities during the pandemic teaching era (2020-2023). Using a combination of “emergency remote teaching” archival materials such as national student datasets, universal design for learning (UDL) training models, digital classroom teaching experiments, university budgetary releases, educational technology coursewares, and lived experience expertise, this dissertation carefully retells the story of “accessibility” as it transpired in disabling classroom containers trapped within intentionally underprepared crisis superstructures. Using rhetorical models derived from critical disability studies, mad studies, social work practice, and health humanities, it then suggests radically collaborative UDL teaching practices that may better pre-empt the dynamic needs of dis/abled students whose needs remain direly underserviced. The manifesto leaves the reader with discrete calls to action that foster more critical performances of intersectionally inclusive UDL classrooms for North American mad students, which it calls “mad-positive” facilitation techniques:
    1. Seek to untie the bond that regards the digital divide and access as synonyms.
    2. UDL practice requires an environment shift that prioritizes change potential.
    3. Advocate against the usage of UDL as a for-all keystone of accessibility.
    4. Refuse or reduce the use of technologies whose primary mandate is dataveillance.
    5. Remind students and allies that university space is a non-neutral affective container.
    6. Operationalize the tracking of student suicides on your home campus.
    7. Seek out physical & affectual ways that your campus is harming social capital potential.
    8. Revise policies and practices that are ability-adjacent imaginings of access.
    9. Eliminate sanist and neuroscientific languaging from how you speak about students.
    10. Vigilantly interrogate how “normal” and “belong” are socially constructed.
    11. Treat lived experience expertise as a gift, not a resource to mine and to spend.
    12. Create non-psychiatric routes of receiving accommodation requests in your classroom.
    13. Seek out uncomfortable stories of mad exclusion and consider carceral logic’s role in it.
    14. Center madness in inclusive methodologies designed to explicitly resist carceral logics.
    15. Create counteraffectual classrooms that anticipate and interrupt kairotic spatial power.
    16. Strive to refuse comfort and immediate intelligibility as mandatory classroom presences.
    17. Create pathways that empower cozy space understandings of classroom practice.
    18. Vector students wherever possible as dynamic ability constellations in assessment.

    Lux junior 2023: 16. Internationales Forum für den lichttechnischen Nachwuchs, 23.–25. Juni 2023, Ilmenau : Tagungsband

    During the 16th International Forum for Young Lighting Researchers (Lux junior), students, doctoral candidates, and recent graduates present their research and development results from all areas of lighting technology. The topics range from lighting applications in a wide variety of fields to light measurement technology, automotive lighting, and LED applications, through to non-visual effects of light. The forum is designed specifically for students and recent graduates in the lighting field. In addition to the talks and posters, it offers opportunities for discussion and individual exchange. Over its 30 years, the biennial conference has become a traditional event, organized by the Lighting Engineering Group of TU Ilmenau together with the Thüringen-Nordhessen regional group of the German Lighting Society (Deutsche Lichttechnische Gesellschaft, LiTG e. V.).

    An Outlook into the Future of Egocentric Vision

    What will the future be? We wonder! In this survey, we explore the gap between current research in egocentric vision and the ever-anticipated future, where wearable computing, with outward-facing cameras and digital overlays, is expected to be integrated into our everyday lives. To understand this gap, the article starts by envisaging the future through character-based stories, showcasing through examples the limitations of current technology. We then provide a mapping between this future and previously defined research tasks. For each task, we survey its seminal works, current state-of-the-art methodologies, and available datasets, then reflect on shortcomings that limit its applicability to future research. Note that this survey focuses on software models for egocentric vision, independent of any specific hardware. The paper concludes with recommendations for areas of immediate exploration so as to unlock our path to the future always-on, personalised, and life-enhancing egocentric vision.
    Comment: We invite comments, suggestions and corrections here: https://openreview.net/forum?id=V3974SUk1

    Segment Anything

    We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy-respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at https://segment-anything.com to foster research into foundation models for computer vision.
    Comment: Project web-page: https://segment-anything.co
