Gentrification Without the Negative: A Rhetorical Analysis of the Franklinton Neighborhood
Numerous disciplines have engaged gentrification, a phenomenon whereby urban neighborhoods experience dramatic changes to their social fabric, since sociologist Ruth Glass introduced the term in 1964. Whether contesting the realities of displacement or how institutional or market forces drive the phenomenon, little scholarly consensus has been reached on the topic. By studying the ground-level gentrification at work in Franklinton, a neighborhood in Ohio's capital city, I mobilize rhetorical theory to analyze how gentrification discourses affectively negotiate its perceived damages and benefits. My analysis focuses on planning documents published by the Columbus Department of Development from 1992 to 2014, as well as local news and media sources. I conclude that Franklinton's gentrification embodies how nonfixed, temporally situated rhetoric shapes urban policy at the local level.
Academic Major: English
Virtual Reality Games for Motor Rehabilitation
This paper presents a fuzzy-logic-based method to track user satisfaction without the need for devices to monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to tailor the environment to each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
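As a rough illustration of the fuzzy-appraisal idea behind FLAME (event desirability mapped through fuzzy membership functions to emotion intensities), the sketch below uses hypothetical membership parameters and only two emotions; it is not the authors' implementation.

```python
# Minimal sketch of a FLAME-style fuzzy appraisal step. The membership
# parameters, emotion set, and aggregation rule are illustrative
# assumptions, not the paper's actual model.

def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def appraise(desirability):
    """Map an event's desirability in [-1, 1] to fuzzy emotion intensities."""
    return {
        "joy":      triangular(desirability, 0.0, 1.0, 2.0),
        "distress": triangular(-desirability, 0.0, 1.0, 2.0),
    }

def estimate_emotion(events):
    """Aggregate appraisals over recent game events (max aggregation)."""
    mood = {"joy": 0.0, "distress": 0.0}
    for desirability in events:
        for emotion, intensity in appraise(desirability).items():
            mood[emotion] = max(mood[emotion], intensity)
    return mood
```

In a game integration, each event (taking damage, scoring a kill) would be assigned a desirability score and fed to `estimate_emotion`, yielding a running estimate of the player's emotional state from software events alone.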
Intelligent system for interaction with virtual characters based on volumetric sensors
Master's dissertation (Dissertação de Mestrado), Electrical and Electronic Engineering, Instituto Superior de Engenharia, Universidade do Algarve, 2015.
Technology has been developed to help us complete our daily tasks or increase our productivity in them. Many machines have been progressively refined to operate more like a human being, using a wide variety of sensors to do so. One of the most challenging problems technology has encountered is how to give a machine the capacity an "animal" has to perceive the world through its visual system. One solution is to equip the machine with intelligent systems that use computer vision. Depth perception by the machine can be a great help here, making it less complex for the machine to detect and understand the objects in an image. With the arrival of volumetric (three-dimensional, 3D) sensors on the consumer market, developments in this scientific area increased, allowing their integration into most devices, such as computers or mobile devices, at a very competitive price. Volumetric sensors can be used in the most varied areas: although they first appeared in video gaming, they extend to video, 3D modelling, interfaces, games, and virtual and augmented reality.
This dissertation focuses essentially on the development of (intelligent) systems based on volumetric sensors (in this case the Microsoft Kinect) for interaction with avatars or films. For video applications, a solution was developed in which a 3D sensor helps a user follow a narrative that starts as soon as the user is detected, changing the events of the video according to predetermined user actions. The user can then change the course of the story by changing position or performing a gesture. This solution is illustrated using rear projection, with the additional possibility of being presented as a scaled hologram.
The approach described in the previous paragraph can also be applied in a more commercial setting. For this, a highly configurable application was developed that can be adjusted visually to the needs of different companies. The graphical environment is accompanied by an avatar or a pre-recorded video that interacts with a user through gestures, giving a more realistic feel through the use of holography. While a user interacts with the installation, all of their movements and interactions are recorded so that statistics can be built, revealing which contents attract the most interest as well as which physical areas see the most interaction. Additionally, a full-body or ID-style photograph of the user can be extracted and offered to them on the company's promotional products. Because a sensor of this type (Kinect) covers only a short interaction area, the possibility of combining several sensors was also developed: 4 to cover the 180 degrees in front of the installation, or 8 to cover the 360 degrees around it, so that users can be detected by any of them and are not lost when they cross into another sensor's zone, or even when they leave the sensors' field of view and return later.
Although these sensors are best known for interaction with virtual games, real, physical games can also benefit from this type of sensor. On this last point, an augmented-reality tool for snooker or billiards is presented. In this application, a 3D sensor placed above the table captures the playing area, which is then processed to detect the balls, the cue, and the cushions. Whenever possible, this detection uses the third dimension (depth) offered by these sensors, making it more robust, for example, to changes in lighting conditions. With these data, the ball's trajectory is then predicted using vector algebra, and the result is projected onto the table.
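The snooker tool's final step, predicting the cue ball's path with vector algebra, can be sketched as a straight-line trace with mirror reflections off the cushions. The table dimensions, bounce count, and function name below are illustrative assumptions, not the dissertation's implementation.

```python
# Sketch of cue-ball path prediction with cushion reflections. Assumes an
# idealized ball (no spin, no friction) and illustrative table dimensions.
import math

def predict_path(pos, direction, table=(1.78, 0.89), bounces=2):
    """Trace a ball's straight-line path, reflecting off the cushions.

    pos, direction: (x, y) tuples; table: (width, height) in metres.
    Returns the cushion-contact points the ball will hit, in order.
    """
    x, y = pos
    norm = math.hypot(*direction)
    dx, dy = direction[0] / norm, direction[1] / norm
    points = []
    for _ in range(bounces):
        # Time to reach each cushion along the current direction.
        candidates = []
        for value, d, limit, axis in ((x, dx, table[0], 0),
                                      (y, dy, table[1], 1)):
            if d > 0:
                candidates.append(((limit - value) / d, axis))
            elif d < 0:
                candidates.append((-value / d, axis))
        t, axis = min(candidates)
        x, y = x + t * dx, y + t * dy
        points.append((round(x, 6), round(y, 6)))
        if axis == 0:
            dx = -dx            # mirror reflection off a side cushion
        else:
            dy = -dy
    return points
```

A projector above the table could then draw the segments between these points onto the cloth as the predicted shot line.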
Eye quietness and quiet eye in expert and novice golf performance: an electrooculographic analysis
Quiet eye (QE) is the final ocular fixation on the target of an action (e.g., the ball in golf putting). Camera-based eye-tracking studies have consistently found longer QE durations in experts than in novices; however, the mechanisms underlying QE are not known. To offer a new perspective, we examined the feasibility of measuring QE using electrooculography (EOG) and developed an index to assess ocular activity across time: eye quietness (EQ). Ten expert and ten novice golfers putted 60 balls to a hole 2.4 m away. Horizontal EOG (2 ms resolution) was recorded from two electrodes placed on the outer sides of the eyes. QE duration was measured using an EOG voltage threshold and comprised the sum of the pre-movement and post-movement initiation components. EQ was computed as the standard deviation of the EOG in 0.5 s bins from –4 to +2 s relative to backswing initiation: lower values indicate less movement of the eyes, hence greater quietness. Finally, we measured club-ball address and swing durations. T-tests showed that total QE did not differ between groups (p = .31); however, experts had marginally shorter pre-movement QE (p = .08) and longer post-movement QE (p < .001) than novices. A group × time ANOVA revealed that experts had less EQ before backswing initiation and greater EQ after backswing initiation (p = .002). QE durations were inversely correlated with EQ from –1.5 to 1 s (rs = –.48 to –.90, ps = .03 to .001). Experts had longer swing durations than novices (p = .01) and, importantly, swing durations correlated positively with post-movement QE (r = .52, p = .02) and negatively with EQ from 0.5 to 1 s (r = –.63, p = .003). This study demonstrates the feasibility of measuring ocular activity using EOG and validates EQ as an index of ocular activity. Its findings challenge the dominant perspective on QE and provide new evidence that expert-novice differences in ocular activity may reflect differences in the kinematics of how experts and novices execute skills.
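The EQ index as described, the standard deviation of the EOG signal in 0.5 s bins over the –4 to +2 s window, can be computed directly; the sketch below assumes a 500 Hz sampling rate (the 2 ms resolution reported) and a synthetic input, and the function name is hypothetical.

```python
# Sketch of the eye-quietness (EQ) computation: per-bin standard deviation
# of an EOG trace. Sampling rate and function name are assumptions.
from statistics import pstdev

def eye_quietness(eog, fs=500, bin_s=0.5):
    """Split an EOG trace (spanning -4 s to +2 s around backswing onset,
    sampled at fs Hz) into 0.5 s bins and return each bin's standard
    deviation: lower values mean less eye movement, i.e. more quietness."""
    n = int(bin_s * fs)   # samples per bin (250 at 500 Hz)
    return [pstdev(eog[i:i + n]) for i in range(0, len(eog) - n + 1, n)]
```

Applied to a full 6 s trace this yields twelve EQ values per putt, which is the time series the group × time ANOVA in the abstract compares between experts and novices.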
A technique for determining viable military logistics support alternatives
Today's US military operates well beyond the scope of protecting and defending the United States. These operations now include, but are not limited to, humanitarian aid, disaster relief, and conflict resolution. This broad spectrum of operational environments has necessitated a transformation of the individual military services into a hybrid force that can leverage the inherent and emerging capabilities from the strengths of those under the umbrella of the Department of Defense (DOD); this concept has been coined Joint Operations.
Supporting Joint Operations requires a new approach to determining a viable military logistics support system. The logistics architecture for these operations has to accommodate scale, time, varied mission objectives, and imperfect information. Compounding the problem is the human-in-the-loop (HITL) decision maker (DM), who is a necessary component for quickly assessing and planning logistics support activities. Past outcomes are not necessarily good indicators of future results, but they can provide a reasonable starting point for planning and predicting specific needs for future requirements.
Forecasting the logistical support structure and commodities needed for any resource-intensive environment has progressed well beyond stable-demand assumptions to approaches in which dynamic and nonlinear environments can be captured with some degree of fidelity and accuracy. While these advances are important, no holistic approach exists that allows exploration of the operational environment or design space and methodically guides the military logistician in forecasting activities. To bridge this capability gap, a method called A Technique for Logistics Architecture Selection (ATLAS) has been developed.
This thesis describes and applies the ATLAS method to a notional military scenario that involves the Navy concept of Seabasing and the Marine Corps concept of Distributed Operations applied to a platoon-sized element. This work uses modeling and simulation to incorporate expert opinion and knowledge of military operations, dynamic reasoning methods, and certainty analysis to create a decision support system (DSS) that provides the DM an enhanced view of the logistics environment and the variables that impact specific measures of effectiveness.
Ph.D. Committee Chair: Mavris, Dimitri; Committee Members: Fahringer, Philip; Nixon, Janel; Schrage, Daniel; Soban, Danielle; Vachtsevanos, Georg
A cognitive ego-vision system for interactive assistance
With increasing computational power and decreasing size, computers nowadays are already wearable and mobile. They have become attendants in people's everyday lives. Personal digital assistants and mobile phones equipped with adequate software gain a lot of public interest, although the functionality they provide in terms of assistance is little more than mobile databases for appointments, addresses, to-do lists, and photos. Compared to the assistance a human can provide, such systems can hardly be called real assistants. The motivation to construct more human-like assistance systems that develop a certain level of cognitive capability leads to the exploration of two central paradigms in this work. The first paradigm is termed cognitive vision systems. Such systems take human cognition as a design principle for their underlying concepts and develop learning and adaptation capabilities to be more flexible in their application. They are embodied, active, and situated. Second, the ego-vision paradigm is introduced as a very tight interaction scheme between a user and a computer system that especially eases close collaboration and assistance between the two. Ego-vision systems (EVS) take a user's (visual) perspective and integrate the human into the system's processing loop by means of shared perception and augmented reality. EVSs adopt techniques of cognitive vision to identify objects, interpret actions, and understand the user's visual perception. And they articulate their knowledge and interpretation by means of augmentations of the user's own view. These two paradigms are studied as rather general concepts, but always with the goal of realizing more flexible assistance systems that closely collaborate with their users. This work provides three major contributions. First, a definition and explanation of ego-vision as a novel paradigm is given. Benefits and challenges of this paradigm are discussed as well.
Second, a configuration of different approaches that permit an ego-vision system to perceive its environment and its user is presented, in terms of object and action recognition, head-gesture recognition, and mosaicing. These account for the specific challenges identified for ego-vision systems, whose perception capabilities are based on wearable sensors only. Finally, a visual active memory (VAM) is introduced as a flexible conceptual architecture for cognitive vision systems in general, and for assistance systems in particular. It adopts principles of human cognition to develop a representation for the information stored in this memory. So-called memory processes continuously analyze, modify, and extend the content of this VAM. The functionality of the integrated system emerges from the coordinated interplay of these memory processes. An integrated assistance system applying the approaches and concepts outlined above is implemented on the basis of the visual active memory. The system architecture is discussed, and some exemplary processing paths in the system are presented and discussed. It assists users in object manipulation tasks and has reached a maturity level that allows user studies to be conducted. Quantitative results of different integrated memory processes are presented, as well as an assessment of the interactive system by means of these user studies.
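The VAM architecture described, a shared store that independent memory processes continuously analyze and extend, can be sketched in a few lines. The element fields, process rule, and class names below are purely illustrative assumptions, not the system's actual representation.

```python
# Minimal sketch of a visual-active-memory (VAM) loop: independent memory
# processes read, annotate, and extend a shared store. All names and the
# example rule are hypothetical.

class VisualActiveMemory:
    def __init__(self):
        self.elements = []          # shared memory content
        self.processes = []         # registered memory processes

    def insert(self, element):
        self.elements.append(element)

    def run_cycle(self):
        """One coordination cycle: every process visits every element."""
        for process in self.processes:
            for element in list(self.elements):
                process(self, element)

# Example memory process: hypothesize a manipulation action once an object
# is observed near the user's hand (purely illustrative rule).
def action_hypothesizer(memory, element):
    if element.get("type") == "object" and element.get("near_hand"):
        if not any(e.get("type") == "action" for e in memory.elements):
            memory.insert({"type": "action", "verb": "grasp",
                           "target": element["label"]})
```

The point of the design is that system behavior emerges from such small processes interacting through the shared memory, rather than from a fixed pipeline.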
State Antifragility: An Agent-Based Modeling Approach to Understanding State Behavior
This dissertation takes an interdisciplinary approach to understanding what makes states antifragile, and why this matters, by constructing a parsimonious, first-of-its-kind agent-based model. The model focuses on the key elements of state antifragility, which reside along a spectrum of fragility and traverse it bidirectionally, from fragile to resilient to antifragile, given a certain set of environmental conditions.
First coined by Nassim Nicholas Taleb and applied to economics, antifragility is a nascent concept. In 2015, Nassim Taleb and Gregory Treverton's article in Foreign Affairs outlined five characteristics of state antifragility. This project aims to advance the study of antifragility in the context of the nation-state beyond these initial contributions by (1) developing three propensity variables associated with antifragility, (2) building a new agent-based model to investigate antifragility, and (3) applying the findings of the model and the propensity-score theorizing to two case studies.
This research posits three propensity variables for a state to become fragile, resilient, or antifragile: learning, power conversion, and agility. Cumulatively, these variables comprise a state's capacity for dealing with various stressors in the international environment. The agent-based model in this dissertation captures the behavior of a single state when confronted with a stressor in a variety of scenarios, forming an essential building block for future work (hinted at in the case studies) involving the interaction between states. The case studies show how the propensity variables and the model results provide the basis for a distinctive and relatively novel evaluation of the historical record: the history of the United States in and with Iraq, and the evolving great-power rivalry between the United States and China, emphasizing the value of taking antifragility seriously in the context of International Studies.
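To make the single-state model concrete, a toy version can be written in a few lines: a state's position on the fragility spectrum shifts with each stressor, moderated by the three propensity variables. The update rule, thresholds, and class names below are illustrative assumptions, not the dissertation's actual model.

```python
# Toy sketch of the single-state agent-based model. Propensity values are
# in [0, 1]; the update rule and classification thresholds are invented
# for illustration only.

class State:
    def __init__(self, learning, power_conversion, agility):
        self.capacity = learning + power_conversion + agility  # in [0, 3]
        self.health = 1.0        # 1.0 = fully resilient baseline

    def shock(self, magnitude):
        """Apply a stressor; high-capacity states gain from volatility,
        low-capacity states are harmed by it."""
        self.health += (self.capacity - 1.5) * magnitude / 1.5

    def classification(self):
        if self.health < 0.8:
            return "fragile"
        if self.health <= 1.2:
            return "resilient"
        return "antifragile"
```

Even this toy captures the defining asymmetry of the concept: the same shock that degrades a low-capacity state improves a high-capacity one, which is what distinguishes antifragility from mere resilience.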