
    Personal computers and the liberating aspects for human creativity

    This narrative inquiry study of selected adults from the UNCG faculty and staff examined the positive feelings that arise when members of this group use personal computers. A preliminary survey was mailed to identify participants for interviews; names were gathered from C-TEP grant recipients, faculty, and staff on campus. Twenty-one adults ranging from 21 to 59 years of age were interviewed, and seven from different departments on campus, two women and five men, were selected for detailed description. The interviews were transcribed and analyzed for recurring themes. The major themes that emerged were learning style, rising expectations, playfulness, liberation, and creativity; each theme is presented in narrative form together with the rich data that supports it.

    Turbulence: Numerical Analysis, Modelling and Simulation

    The problem of accurate and reliable simulation of turbulent flows is a central and intractable challenge that crosses disciplinary boundaries. As the need for accuracy increases and applications expand beyond flows for which extensive calibration data are available, a sound mathematical foundation that addresses the needs of practical computing becomes ever more important. This Special Issue is directed at this crossroads of rigorous numerical analysis, the physics of turbulence, and the practical needs of turbulent flow simulation. It seeks papers that give a broad view of the status of the problems considered and of the open problems that constitute further steps.

    The use of data-mining for the automatic formation of tactics

    This paper discusses the use of data-mining for the automatic formation of tactics. It was presented at the Workshop on Computer-Supported Mathematical Theory Development held at IJCAR in 2004. The aim of this project is to evaluate the applicability of data-mining techniques to the automatic formation of tactics from large corpora of proofs. We data-mine information from large proof corpora to find commonly occurring patterns, which are then evolved into tactics using genetic programming techniques.
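    As a rough illustration of the pattern-mining step described above, the sketch below counts frequently co-occurring tactic subsequences in a toy proof corpus. The corpus, the tactic names, the subsequence length, and the support threshold are all hypothetical placeholders, not data or parameters from the project.

```python
from collections import Counter
from itertools import islice

# Hypothetical toy corpus: each proof is a sequence of primitive tactic names.
proofs = [
    ["intro", "rewrite", "simplify", "apply_lemma", "qed"],
    ["intro", "intro", "rewrite", "simplify", "qed"],
    ["case_split", "rewrite", "simplify", "apply_lemma", "qed"],
]

def ngrams(seq, n):
    """Yield all contiguous subsequences of length n."""
    return zip(*(islice(seq, i, None) for i in range(n)))

def frequent_patterns(corpus, n=3, min_support=2):
    """Count length-n tactic subsequences and keep those that occur
    in at least `min_support` distinct proofs."""
    counts = Counter()
    for proof in corpus:
        # Count each pattern at most once per proof (document frequency).
        counts.update(set(ngrams(proof, n)))
    return {p: c for p, c in counts.items() if c >= min_support}

print(frequent_patterns(proofs))
# e.g. {('rewrite', 'simplify', 'apply_lemma'): 2, ...}
```

    Patterns surviving the support threshold would then be candidate building blocks for the genetic-programming stage that evolves them into executable tactics.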

    Vision-based urban navigation procedures for verbally instructed robots

    The work presented in this thesis is part of a project in instruction-based learning (IBL) for mobile robots, in which a robot is designed that can be instructed by its users through unconstrained natural language. The robot uses vision guidance to follow route instructions in a miniature town model. The aim of the work presented here was to determine the functional vocabulary of the robot in the form of "primitive procedures". In contrast to previous work in the field of instructable robots, this was done following a "user-centred" approach, where the main concern was to create primitive procedures that can be directly associated with natural language instructions. To achieve this, a corpus of human-to-human natural language instructions was collected and analysed, and a set of primitive actions was found with which the collected corpus could be represented. These primitive actions were then implemented as robot-executable procedures.

    Natural language instructions are under-specified when they are destined to be executed by a robot, because instructors omit information that they consider "commonsense" and rely on the listener's sensory-motor capabilities to determine the details of the task execution. In this thesis the under-specification problem is solved by determining the missing information, either during the learning of new routes or during their execution by the robot. During learning, the missing information is determined by imitating the commonsense approach human listeners take to achieve the same purpose. During execution, missing information, such as the location of road layout features mentioned in route instructions, is determined from the robot's view by using image template matching. The original contribution of this thesis, in both these methods, lies in the fact that they are driven by the natural language examples found in the corpus collected for the IBL project.

    During the testing phase, a high success rate of primitive calls, when these were considered individually, showed that the under-specification problem has overall been solved. A novel method for testing the primitive procedures as part of complete route descriptions is also proposed: the performance of human subjects driving the robot while following route descriptions was compared with the performance of the robot executing the same route descriptions. The results obtained from this comparison clearly indicated where errors occur between the time a human speaker gives a route description and the time the task is executed by a human listener or by the robot. Finally, a software speed controller is proposed in order to control the wheel speeds of the robot used in this project. The controller employs PI (proportional-integral) and PID (proportional-integral-derivative) control and provides a good alternative to expensive hardware controllers.
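    The speed controller mentioned in the final paragraph is a standard discrete PID law. Below is a minimal sketch of such a controller; the gains, the control rate, and the toy plant response are illustrative assumptions rather than values from the thesis. Setting kd=0 recovers the PI variant also described.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        """Return the control command for one sampling period."""
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Hypothetical usage: regulate one wheel toward 0.5 m/s at 50 Hz.
controller = PID(kp=1.2, ki=0.5, kd=0.05, dt=0.02)
measured_speed = 0.0
for _ in range(5):
    command = controller.update(setpoint=0.5, measured=measured_speed)
    # Send `command` to the motor driver; here we fake the plant response.
    measured_speed += 0.1 * command
```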

    Safety-critical scenarios and virtual testing procedures for automated cars at road intersections

    This thesis addresses the problem of road intersection safety with regard to a mixed population of automated vehicles and non-automated road users. The work derives and evaluates safety-critical scenarios at road junctions, which can pose a particular safety problem for automated cars, and presents and demonstrates a simulation and evaluation framework for car-to-car accidents that allows the safety performance of automated driving systems to be examined within those scenarios. Given the recent advancements in automated driving functions, one of the main challenges is safe and efficient operation in complex traffic situations such as road junctions. There is a need for comprehensive testing, either in virtual testing environments or on real-world test tracks. Since it is unrealistic to cover all possible combinations of traffic situations and environment conditions, the challenge is to find the key driving situations to be evaluated at junctions.

    Against this background, a novel method to derive critical pre-crash scenarios from historical car accident data is presented. It employs k-medoids to cluster historical junction crash data into distinct partitions and then applies the association rules algorithm to each cluster to specify the driving scenarios in more detail. The dataset used consists of 1,056 junction crashes in the UK, exported from the in-depth On-the-Spot database. The study resulted in thirteen crash clusters for T-junctions and six crash clusters for crossroads; association rules revealed common crash characteristics, which formed the basis for the scenario descriptions.

    As a follow-up to the scenario generation, the thesis further presents a novel, modular framework to transfer the derived collision scenarios to a sub-microscopic traffic simulation environment. The software CarMaker is used with MATLAB/Simulink to simulate realistic models of vehicles, sensors and road environments, and is combined with an advanced Monte Carlo method to obtain a representative set of parameter combinations. The analysis of different safety performance indicators computed from the simulation outputs reveals collision and near-miss probabilities for selected scenarios. The usefulness and applicability of the simulation and evaluation framework are demonstrated for a selected junction scenario, where the safety performance of different in-vehicle collision avoidance systems is studied. The results show that the number of collisions and conflicts was reduced to a tenth when a crossing and turning assistant was added to a basic forward collision avoidance system. Due to its modular architecture, the presented framework can be adapted to the individual needs of future users and may be enhanced with customised simulation models. Ultimately, the thesis leads to more efficient workflows when virtually testing automated driving at intersections, as a complement to field operational tests on public roads.
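    To make the clustering step concrete, here is a minimal, self-contained k-medoids sketch that operates on a precomputed pairwise distance matrix. The toy points and parameters are hypothetical stand-ins: the actual On-the-Spot crash variables are largely categorical and would require an appropriate dissimilarity measure rather than Euclidean distance.

```python
import numpy as np

def k_medoids(D, k, n_iter=100, rng=None):
    """Plain alternating k-medoids on an (n, n) distance matrix D.
    Returns medoid indices and a cluster label for each point."""
    rng = np.random.default_rng(rng)
    n = D.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        # Assign each point to its nearest current medoid.
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if members.size == 0:
                continue
            # The medoid minimises total distance to its cluster members.
            costs = D[np.ix_(members, members)].sum(axis=1)
            new_medoids[j] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break  # converged
        medoids = new_medoids
    return medoids, np.argmin(D[:, medoids], axis=1)

# Hypothetical toy example: six "crashes" described by two numeric features.
pts = np.array([[0, 0], [0.2, 0.1], [0.1, 0.3],
                [5, 5], [5.2, 4.9], [4.8, 5.1]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
medoids, labels = k_medoids(D, k=2, rng=0)
print(medoids, labels)
```

    In the workflow the thesis describes, association-rule mining would then be run within each resulting cluster to characterise it as a scenario description.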

    At the crossroads of big science, open science, and technology transfer

    Big science infrastructures are confronting increasing demands for public accountability, not only for their contribution to scientific discovery but also for their capacity to generate secondary economic value. To build and operate their sophisticated infrastructures, big science centres often generate frontier technologies by designing and building technical solutions to complex and unprecedented engineering problems. In parallel, the previous decade has seen the disruption of rapid technological changes impacting the way science is done and shared, which has led to the coining of the concept of Open Science (OS). Governments are quickly moving towards the OS paradigm and asking big science centres to "open up" the scientific process. Yet these two forces run in opposition, as the commercialization of scientific outputs usually requires significant financial investment, and companies are willing to bear this cost only if they can protect the innovation from imitation or unfair competition. This PhD dissertation aims to understand how new applications of ICT are affecting primary research outcomes and the resultant technology transfer in the context of big science and OS. It attempts to uncover the tensions between these two normative forces and to identify the mechanisms employed to overcome them.

    The dissertation comprises four separate studies: 1) a mixed-method study combining two large-scale global online surveys of research scientists (2016, 2018) with two case studies in the high energy physics and molecular biology scientific communities, assessing the explanatory factors behind scientific data-sharing practices; 2) a case study of Open Targets, an information infrastructure based upon data commons, where the European Molecular Biology Laboratory-EBI and pharmaceutical companies collaborate and share scientific data and technological tools to accelerate drug discovery; 3) a study of a unique dataset of 170 projects funded under ATTRACT, a novel policy instrument of the European Commission led by European big science infrastructures, which aims to understand the nature of the serendipitous process behind transitioning big science technologies to previously unanticipated commercial applications; and 4) a case study of White Rabbit, sophisticated open-source hardware developed at the European Council for Nuclear Research (CERN) in collaboration with an extensive ecosystem of companies.

    Military veterans and college success: a qualitative examination of veteran needs in higher education

    Military veterans are a rapidly growing population of non-traditional students in the United States. The Post-9/11 G.I. Bill, effective in 2009, has made it easier for military veterans to fund higher education upon discharge from the military. Traditional four-year colleges and universities are well suited to serving students who have recently finished high school; however, are they properly prepared to serve military veterans? Veterans bring with them a host of personal issues and needs beyond educational funding, which may tax the capacity of student services professionals, faculty, and campus architecture, and, like many non-traditional students, they encounter barriers to success that most traditional college students do not face. This qualitative study, an analysis of personal interviews with 13 Post-9/11 G.I. Bill veterans, reveals, through examination of theoretical and heuristic knowledge, a multitude of individual and collective needs in college. Military veterans at the research institution seek anonymity on campus, treatment as adults, a veterans' center offering transition assistance, camaraderie, and administrative help, better marketing of available services, college credit for military training and experience, and a stake in guiding their own future in college. The interviews also reveal that, even given the perceived lack of solutions for the needs listed above, a knowledgeable and compassionate veterans' liaison on campus may make college a successful venture for military veterans.

    How much of driving is pre-attentive?

    Driving a car in an urban setting is an extremely difficult problem, incorporating a large number of complex visual tasks; yet the problem is solved daily by most adults with little apparent effort. This paper proposes a novel vision-based approach to autonomous driving that can predict, and even anticipate, a driver's behavior in real time, using pre-attentive vision only. Experiments on three large datasets totaling over 200,000 frames show that our pre-attentive model can (1) detect a wide range of driving-critical context, such as crossroads, city center, and road type; more surprisingly, it can also (2) detect the driver's actions (over 80% of braking and turning actions) and (3) estimate the driver's steering angle accurately. Additionally, our model is consistent with human data. First, the best steering prediction is obtained for a perception-to-action delay consistent with psychological experiments; importantly, this prediction can be made before the driver's action. Second, the regions of the visual field used by the computational model correlate strongly with the driver's gaze locations, significantly outperforming many saliency measures and proving comparable to state-of-the-art approaches.

    European Commission's Seventh Framework Programme (FP7/2007-2013)
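    As a loose illustration of the general idea, a pre-attentive model can be caricatured as a linear read-out from very coarse, global image features. The sketch below, using randomly generated placeholder frames and steering angles, is an assumption-laden toy of that pipeline, not the paper's actual model, features, or data.

```python
import numpy as np

def preattentive_features(frame, grid=(4, 8)):
    """Very coarse global features: mean intensity over a low-resolution
    grid, standing in for a pre-attentive 'gist' of the scene."""
    h, w = frame.shape
    gh, gw = grid
    return frame[:h - h % gh, :w - w % gw] \
        .reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3)).ravel()

# Hypothetical data: 500 grayscale frames with recorded steering angles.
rng = np.random.default_rng(0)
frames = rng.random((500, 64, 128))
angles = rng.uniform(-0.5, 0.5, 500)          # radians

X = np.stack([preattentive_features(f) for f in frames])
X = np.hstack([X, np.ones((X.shape[0], 1))])  # bias term

# Ridge regression, fit in closed form: w = (X'X + lam*I)^-1 X'y.
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ angles)
predicted = X @ w                              # steering-angle estimates
```

    A delay between perception and action, like the one the paper reports, could be probed in such a setup by regressing each frame's features against the steering angle recorded some fixed number of frames later.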