
    An Investigation of the Performance of Urban Rail Transit Systems on the Corridor Level: A Comparative Analysis in the American West

    Since the 1980s, significant investments have been made in urban rail transit across the United States, particularly in light rail technology. Most of these light rail systems have been built in Sunbelt cities that no longer had legacy rail systems. As a result, they were constructed using a building-blocks approach, funded corridor by corridor. Most research on urban rail performance, however, has taken place at the system-wide level, leaving a significant gap at the level of the transit corridor. This research examined nineteen urban rail corridors in Denver, Salt Lake City, and Portland. A performance score was constructed for each corridor based on ridership per mile, ridership growth, capital costs, and the cost of ongoing operations. These scores were then compared with a constructed profile of each corridor, which included factors such as population and job density, median income, park-and-ride spaces, available bus connections, walkability, and headways between trains. Corridors in each city ranked both high and low, with no city emerging as a clear frontrunner. Headways, population density, and the percentage of renter-occupied housing units were found to have a statistically significant relationship with high corridor performance, largely in line with previous studies. Qualitative data gathered in this research suggest that partnerships with municipalities, communities, and businesses also played a crucial role in the development of successful urban rail corridors.
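The abstract does not give the exact weighting behind the corridor performance score. A composite of this kind is commonly built by converting each metric to a z-score and averaging, with cost metrics negated so that lower costs score higher; the sketch below follows that convention. The corridor names and all figures are invented for illustration only.

```python
from statistics import mean, stdev

# Hypothetical corridor metrics (riders/mile, annual growth fraction,
# capital cost and operating cost in $M) -- NOT the study's actual data.
corridors = {
    "Corridor A": {"riders_per_mile": 2400, "growth": 0.12, "capital_cost": 55.0, "op_cost": 9.5},
    "Corridor B": {"riders_per_mile": 1800, "growth": 0.05, "capital_cost": 40.0, "op_cost": 7.0},
    "Corridor C": {"riders_per_mile": 3100, "growth": 0.08, "capital_cost": 70.0, "op_cost": 11.0},
}

# Direction of each metric: ridership measures reward high values,
# cost measures reward low values.
HIGHER_IS_BETTER = {"riders_per_mile": True, "growth": True,
                    "capital_cost": False, "op_cost": False}

def zscores(values):
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def performance_scores(corridors):
    names = list(corridors)
    scores = {n: 0.0 for n in names}
    for metric, higher in HIGHER_IS_BETTER.items():
        zs = zscores([corridors[n][metric] for n in names])
        for n, z in zip(names, zs):
            scores[n] += z if higher else -z
    # Average across the four metrics so each contributes equally.
    return {n: s / len(HIGHER_IS_BETTER) for n, s in scores.items()}

ranked = sorted(performance_scores(corridors).items(), key=lambda kv: -kv[1])
```

Because z-scores are centered, the scores sum to zero and express each corridor's performance relative to the group, which is all a within-group ranking needs.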

    Automated Knowledge Generation with Persistent Surveillance Video

    The Air Force has increasingly invested in persistent surveillance platforms that gather large amounts of surveillance video. Ordinarily, intelligence analysts watch the video to determine whether suspicious activities are occurring, a time- and manpower-intensive approach to video analysis. Instead, this thesis proposes that, by using tracks generated from persistent video, we can build a model to detect events for an intelligence analyst. The event we chose to detect was a suspicious surveillance activity known as a casing event. To test our model, we used Global Positioning System (GPS) tracks generated from vehicles driving in an urban area. The results show that over 400 vehicles can be monitored simultaneously in real time, and casing events are detected with high probability (43 of 43 events detected, with only 4 false positives). Casing event detections are augmented by determining which buildings are being targeted. In addition, persistent surveillance video is used to construct a social network from vehicle tracks based on the interactions of those tracks. The constructed social networks provide further information about the suspicious actors flagged by the casing event detector, telling us whom a suspicious actor has interacted with and which buildings they have visited. The end result is a process that automatically generates information from persistent surveillance video, providing intelligence analysts with additional knowledge and understanding of terrorist activities.
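The thesis does not spell out how track interactions become social-network edges, but a common minimal formulation is to connect two vehicles whose tracks are repeatedly close in both space and time. The sketch below uses that formulation; the toy tracks, distance threshold, and meeting count are all illustrative assumptions, not the thesis's parameters.

```python
import math
from itertools import combinations

# Toy GPS tracks: vehicle id -> list of (time step, x, y) in meters.
tracks = {
    "V1": [(0, 0.0, 0.0), (1, 10.0, 0.0), (2, 20.0, 0.0)],
    "V2": [(0, 0.0, 5.0), (1, 10.0, 5.0), (2, 200.0, 5.0)],   # drives off at t=2
    "V3": [(0, 500.0, 500.0), (1, 510.0, 500.0)],             # far away throughout
}

def interacts(track_a, track_b, max_dist=15.0, min_meetings=2):
    """Two tracks interact if they are within max_dist of each other
    at min_meetings or more common time steps (illustrative thresholds)."""
    pos_b = {t: (x, y) for t, x, y in track_b}
    meetings = sum(
        1 for t, x, y in track_a
        if t in pos_b and math.hypot(x - pos_b[t][0], y - pos_b[t][1]) <= max_dist
    )
    return meetings >= min_meetings

def social_network(tracks):
    """Edge list: an edge joins every pair of vehicles whose tracks interact."""
    return [(a, b) for a, b in combinations(tracks, 2)
            if interacts(tracks[a], tracks[b])]

edges = social_network(tracks)
```

Here V1 and V2 travel together at t=0 and t=1, producing a single edge; from such an edge list one can then read off whom a flagged actor has interacted with, as the abstract describes.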

    From Colonies to Client-States: The Origins of France's Postcolonial Relationship with Sub-Saharan Africa, 1940-1969

    This dissertation examines the transformation of French mentalities regarding France's role in Africa, beginning with World War II and continuing through the end of Charles de Gaulle's presidency in 1969. Despite the political independence of France's African colonies in 1960, many of them quickly transitioned from colonies into client-states. Since then, France's relationships with its former colonies have enabled a variety of underhanded dealings on the continent. In tracing the roots of this transformation, I focus on French politicians and colonial administrators, and their gradual ideological shift away from traditional conceptions of the French colonial mission. I argue that the events of World War II, which split the empire and placed France in a greatly disadvantageous international position (first with respect to Nazi Germany and later vis-à-vis the Allies), led to a formidable shift in how France viewed its colonies and other Francophone territories in sub-Saharan Africa. French insecurity, precipitated by its fall as a major world power, required new ways to maintain influence internationally and in its empire. This mentality, while shaped by the postwar environment, was not the product of any one political ideology; it was shared by colonial administrators in both the Vichy and Free French regimes, and by politicians on both the left and right of the political spectrum after the war. At the same time, French officials grew increasingly wary of British and American efforts to broaden their respective standings in Africa. This renewed concern about the "Anglo-Saxon" threat and the increasing need to preserve influence in Africa in a postcolonial age were powerful undercurrents in the formation of French policy on the continent leading up to and after decolonization. The result was increasingly cynical support of despotic regimes friendly to French interests, in an effort to maintain political influence in Africa after decolonization.

    Position and Volume Estimation of Atmospheric Nuclear Detonations from Video Reconstruction

    Recent work digitizing films of foundational atmospheric nuclear detonations from the 1950s provides an opportunity to perform deeper analysis of these historical tests. This work leverages multi-view geometry and computer vision techniques to provide an automated means of performing three-dimensional analysis of the blasts at several points in time. Accomplishing this requires careful alignment of the films in time, detection of features in the images, matching of features, and multi-view reconstruction. Sub-explosion features can be detected with a 67% hit rate and a 22% false alarm rate. Hotspot features can be detected with a 71.95% hit rate, 86.03% precision, and a 0.015% false positive rate. Detected hotspots are matched across viewpoint separations of 57–109° with 76.63% average correct matching by defining their locations relative to the center of the explosion, rotating them to the alternative viewpoint, and matching them collectively. When 3D reconstruction is applied to the hotspot matches, it completes an automated process that has been used to create 168 3D point clouds averaging 31.6 points per reconstruction, with each point having an accuracy of 0.62 meters (0.35, 0.24, and 0.34 meters in the x-, y-, and z-directions, respectively). As a demonstration of using the point clouds for analysis, volumes are estimated and shown to be consistent with radius-based models, and in some cases they improve on the level of uncertainty in the yield calculation.
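The rotate-then-match step described above can be sketched in a few lines: express each hotspot relative to the explosion center, rotate about the vertical axis by the angular separation between cameras, and pair each rotated point with its nearest neighbour in the other view. The greedy nearest-neighbour pairing here is a stand-in for the thesis's collective matching, and the hotspot coordinates and 80° separation are invented for illustration.

```python
import math

# Hotspots relative to the explosion center, (x, y, z) -- illustrative values.
hotspots_cam1 = [(1.0, 0.0, 2.0), (0.0, 1.5, 1.0), (-1.2, 0.3, 0.5)]

def rotate_about_z(p, angle_deg):
    """Rotate a center-relative point about the vertical (z) axis."""
    a = math.radians(angle_deg)
    x, y, z = p
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a),
            z)

def match_hotspots(pts_a, pts_b, view_sep_deg):
    """Rotate camera-A hotspots into camera B's frame, then pair each with
    its nearest neighbour in B (greedy stand-in for collective matching)."""
    matches = []
    for i, p in enumerate(pts_a):
        q = rotate_about_z(p, view_sep_deg)
        j = min(range(len(pts_b)), key=lambda k: math.dist(q, pts_b[k]))
        matches.append((i, j))
    return matches

# Simulate camera B as the same hotspots seen from 80 degrees away,
# listed in a different order so the matching has work to do.
hotspots_cam2 = [rotate_about_z(p, 80.0) for p in reversed(hotspots_cam1)]
matches = match_hotspots(hotspots_cam1, hotspots_cam2, 80.0)
```

With matched pairs in hand, standard triangulation from the two calibrated views yields the 3D points that make up the reconstructions described above.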

    GPU-Accelerated Point-Based Color Bleeding

    Traditional global illumination techniques like radiosity and Monte Carlo sampling are computationally expensive. This prompted Pixar to develop the Point-Based Color Bleeding (PBCB) algorithm to approximate complex indirect illumination while meeting the demands of movie production: reduced memory usage, run time independent of surface shading, and faster renders than the aforementioned techniques. The PBCB algorithm works by discretizing a scene's directly illuminated geometry into a point cloud (surfel) representation. When computing the indirect illumination at a point, the surfels are rasterized onto cube faces surrounding that point, and the constituent pixels are combined into the final approximate indirect lighting value. In this thesis we present a performance enhancement to the PBCB algorithm through hardware acceleration; our contribution incorporates GPU-accelerated rasterization into the cube-face raster phase. The goal is to leverage the powerful rasterization capabilities of modern graphics processors to speed up the PBCB algorithm over standard software rasterization. Additionally, we contribute a preprocess that generates triangular surfels suited for fast rasterization by the GPU, and we show that new heterogeneous-architecture chips (e.g., Sandy Bridge from Intel) simplify the code required to leverage the power of the GPU. Our algorithm reproduces the output of the traditional Monte Carlo technique with a speedup of 41.65x, and additionally achieves a 3.12x speedup over software-rasterized PBCB.
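The "combine the constituent pixels" step can be illustrated with a toy gather: each cube-face pixel contributes its rasterized radiance weighted by the solid angle it subtends, a standard cube-map weighting. The face radiances below are made up, the resolution is tiny, and the thesis's actual combination may weight differently; in the real algorithm each face would be a GPU-rasterized buffer of surfel colors.

```python
# Toy cube-face raster around a shading point: six faces of uniform
# illustrative radiance (a bright "+y" ceiling bleeding color downward).
FACE_RES = 4
faces = {
    "+x": [[0.2] * FACE_RES for _ in range(FACE_RES)],
    "-x": [[0.0] * FACE_RES for _ in range(FACE_RES)],
    "+y": [[0.5] * FACE_RES for _ in range(FACE_RES)],
    "-y": [[0.1] * FACE_RES for _ in range(FACE_RES)],
    "+z": [[0.2] * FACE_RES for _ in range(FACE_RES)],
    "-z": [[0.2] * FACE_RES for _ in range(FACE_RES)],
}

def gather_indirect(faces, res=FACE_RES):
    """Combine cube-face pixels into one indirect value, weighting each
    pixel by the solid angle it subtends on the cube."""
    total = weight_sum = 0.0
    for face in faces.values():
        for i in range(res):
            for j in range(res):
                # Pixel center in [-1, 1] face coordinates.
                u = (i + 0.5) / res * 2.0 - 1.0
                v = (j + 0.5) / res * 2.0 - 1.0
                w = 1.0 / (1.0 + u * u + v * v) ** 1.5  # solid-angle weight
                total += face[i][j] * w
                weight_sum += w
    return total / weight_sum

indirect = gather_indirect(faces)
```

The weighting matters because corner pixels of a cube face cover far less solid angle than center pixels; without it the gather would over-count the corners.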

    Unification in the application and interpretation of regulations on economic performance

    The objective of this project was to unify the application and interpretation of regulations among the control entities that intervene in and administer state resources, in order to guarantee the recovery of these funds. The involvement of a higher-level entity is necessary because differences arise among the organization, the trust, the judicial bodies, and clients over the legislative concepts applied when settling or denying economic benefits. To recover the funds it pays in the first instance, the organization must accept part of the decisions adopted by the trust, which produces changes in its operations that lack a legal basis and leaves clients dissatisfied with how the regulations are applied, since the organization has no arguments to support decisions that were made under policies determined case by case. Likewise, it is the responsibility of the higher-level entity to ensure that the bodies issuing judicial orders in health matters do so clearly and correctly, avoiding the mismanagement of public funds; in addition, the system does not allow the recovery of funds paid in compliance with tutela rulings. This makes modifications to the process necessary, beginning with the implementation of different communication mechanisms, such as training sessions and manuals aimed at both employees and contributors, in order to keep them up to date on the regulations and on the conditions that members of the Sistema General de Seguridad Social en Salud must meet to obtain recognition of economic benefits. Likewise, the organization's legal department should inform the judicial bodies of the current regulations that apply in this sector.

    Evaluating Morphological Computation in Muscle and DC-motor Driven Models of Human Hopping

    In the context of embodied artificial intelligence, morphological computation refers to processes conducted by the body (and environment) that would otherwise have to be performed by the brain. Exploiting environmental and morphological properties is an important feature of embodied systems, chiefly because it allows the controller complexity to be significantly reduced. An important aspect of morphological computation is that it cannot be assigned to an embodied system per se; rather, it is, as we show, behavior- and state-dependent. In this work, we evaluate two different measures of morphological computation that can be applied to robotic systems and to computer simulations of biological movement. As an example, these measures were evaluated on muscle- and DC-motor-driven hopping models. We show that a state-dependent analysis of the hopping behaviors provides additional insights that cannot be gained from the averaged measures alone. This work includes algorithms and computer code for the measures.
    Comment: 10 pages, 4 figures, 1 table, 5 algorithms
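The paper's muscle and DC-motor hopping models are not specified in the abstract, but a common reduced model of hopping is a one-dimensional spring-mass ("spring-loaded") hopper whose leg spring engages only in stance, nicely illustrating morphology (the spring) doing work the controller would otherwise do. The sketch below is such a generic model, with invented parameters; it is not the paper's implementation.

```python
# Minimal 1D spring-mass hopper: leg force acts only while the body is
# below the leg rest length (stance); otherwise ballistic flight.
# All parameters are illustrative.
M, K, L0, G = 80.0, 20000.0, 1.0, 9.81   # mass, stiffness, rest length, gravity
DT = 1e-4                                 # integration step

def simulate(y0=1.05, v0=0.0, t_end=2.0):
    """Semi-implicit Euler integration; returns flight-phase apex heights."""
    y, v = y0, v0
    apexes = []
    prev_v = v0
    t = 0.0
    while t < t_end:
        spring = K * (L0 - y) if y < L0 else 0.0   # passive leg force in stance
        a = spring / M - G
        v += a * DT
        y += v * DT
        # Apex: upward velocity crosses zero while airborne.
        if prev_v > 0.0 >= v and y > L0:
            apexes.append(y)
        prev_v = v
        t += DT
    return apexes

apexes = simulate()
```

Note that no controller appears anywhere in the loop: the passive spring alone produces periodic hopping, which is exactly the kind of body-borne "computation" the measures in the paper aim to quantify.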