425 research outputs found

    Automobile Races and the Marketing of Places: A Geographic and Marketing Exploration of IndyCar Racing in the United States

    IndyCar events attract thousands of spectators and over one million television viewers. Additionally, IndyCar is the most elite form of motorsport that races on oval speedways, natural terrain road courses, and temporary street circuits. This research uses case studies of IndyCar events contested on each of these three venue types (Iowa Corn Indy 250 – oval speedway; Indy 200 at Mid-Ohio – road course; Grand Prix of St. Petersburg – street circuit). Previous research in figurational sociology, place marketing, and mega-events provides a framework used to identify key similarities and differences among the perceived and observed benefits and costs of an IndyCar race for its host city and region. Identification and analysis of key local event stakeholders and sponsors, based on a content analysis of event souvenir programs, television broadcasts, and local newspaper coverage, revealed key differences among the three case study events. Street circuit races rely on a high level of public support, have a high impact on businesses and residents surrounding the venue, and can showcase a city’s downtown amenities via television exposure of city streets during most of the event. In the case of St. Petersburg, the festival atmosphere and high speed of IndyCar racing in its downtown streets have been part of a process of re-inventing the city as it sheds its image as a quiet city with mostly older residents, and it has been successful in attracting both visitors and residents downtown. Oval speedway events rely on high participation of private, local event sponsors that market their goods or services mostly to local race fans who, for the most part, stay only at the speedway on race day. In particular, the Iowa Corn Indy 250 provides local corn-based ethanol producers a platform to promote their product in high-performance race cars. Road course races attract a greater number of weekend-long, on-site camping motorsport enthusiasts and participants, as these events are more a celebration of the automobile industry and, in particular, the Honda assembly plants that employ thousands of nearby Ohio residents. The results of this research provide key lessons for other current and potential IndyCar venues across the three venue types.

    1997-1999 Otterbein College Bulletin

    Volume 87

    Multimodal Video Analysis and Modeling

    From recalling long forgotten experiences based on a familiar scent or a piece of music, to lip-reading-aided conversation in noisy environments or travel sickness caused by a mismatch of the signals from vision and the vestibular system, human perception manifests countless examples of subtle and effortless joint use of the multiple senses provided to us by evolution. Emulating such multisensory (or multimodal, i.e., comprising multiple types of input modes or modalities) processing computationally offers tools for more effective, efficient, or robust accomplishment of many multimedia tasks using evidence from the multiple input modalities. Information from the modalities can also be analyzed for patterns and connections across them, opening up interesting applications not feasible with a single modality, such as prediction of some aspects of one modality based on another. In this dissertation, multimodal analysis techniques are applied to selected video tasks with accompanying modalities. More specifically, all the tasks involve some type of analysis of videos recorded by non-professional videographers using mobile devices.

    Fusion of information from multiple modalities is applied to recording environment classification from video and audio, as well as to sport type classification from a set of multi-device videos, the corresponding audio, and recording device motion sensor data. The environment classification combines support vector machine (SVM) classifiers trained on various global visual low-level features with audio event histogram based environment classification using k-nearest neighbors (k-NN). Rule-based fusion schemes with genetic algorithm (GA)-optimized modality weights are compared to training an SVM classifier to perform the multimodal fusion. A comprehensive selection of fusion strategies is compared for the task of classifying the sport type of a set of recordings from a common event. These include fusion prior to, simultaneously with, and after classification; various approaches for using modality quality estimates; and fusing soft confidence scores as well as crisp single-class predictions. Additionally, different strategies are examined for aggregating the decisions of single videos into a collective prediction for the set of videos recorded concurrently with multiple devices. In both tasks, multimodal analysis shows a clear advantage over separate classification of the modalities.

    Another part of the work investigates cross-modal pattern analysis and audio-based video editing. This study examines the feasibility of automatically timing shot cuts of multi-camera concert recordings according to music-related cutting patterns learnt from professional concert videos. Cut timing is a crucial part of the automated creation of multi-camera mashups, where shots from multiple recording devices at a common event are alternated with the aim of mimicking a professionally produced video. In the framework, separate statistical models are formed for typical patterns of beat-quantized cuts in short segments, for differences in beats between consecutive cuts, and for the relative deviation of cuts from exact beat times. Based on music meter and audio change point analysis of a new recording, the models can be used for synthesizing cut times. In a user study, the proposed framework clearly outperforms a baseline automatic method with comparably advanced audio analysis and wins 48.2% of comparisons against hand-edited videos.
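
    The late-fusion idea described for the environment classification task can be illustrated with a minimal sketch. The sketch assumes synthetic stand-in features (the dissertation uses global visual low-level features and audio event histograms), off-the-shelf scikit-learn classifiers, and hand-picked modality weights in place of the GA-optimized weights; it is illustrative only, not the author's implementation.

```python
# Minimal late-fusion sketch: SVM on (synthetic) visual features, k-NN on
# (synthetic) audio features, and a weighted rule-based fusion of soft scores.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_train, n_test, n_classes = 200, 50, 4

# Hypothetical per-modality feature vectors with shared environment labels.
X_vis_train, X_vis_test = rng.normal(size=(n_train, 64)), rng.normal(size=(n_test, 64))
X_aud_train, X_aud_test = rng.normal(size=(n_train, 20)), rng.normal(size=(n_test, 20))
y_train = rng.integers(0, n_classes, n_train)

# One classifier per modality.
vis_clf = SVC(probability=True).fit(X_vis_train, y_train)
aud_clf = KNeighborsClassifier(n_neighbors=5).fit(X_aud_train, y_train)

# Rule-based fusion of per-class probabilities with modality weights; in the
# dissertation the weights are optimized with a genetic algorithm, here they
# are simply fixed by hand.
w_vis, w_aud = 0.6, 0.4
fused_scores = (w_vis * vis_clf.predict_proba(X_vis_test)
                + w_aud * aud_clf.predict_proba(X_aud_test))
fused_prediction = fused_scores.argmax(axis=1)
print(fused_prediction[:10])
```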

    General Course Catalog [2012/14]

    Undergraduate Course Catalog, 2012/14

    General Course Catalog [January-June 2015]

    Undergraduate Course Catalog, January-June 2015

    Undergraduate and graduate catalog [1995-1996]


    Undergraduate Catalogue 2002-2003


    General Course Catalog [January-June 2016]

    Undergraduate Course Catalog, January-June 2016