    Pedestrians as floating life - on the reinvention of the pedestrian city

    Walking, with its average speed of 5 km/h, was for a very long period the primary mode by which humans moved through and engaged with their immediate material environment. Over the past half-century, however, the socio-technical systems of automobility, as well as other forms of non-human-powered mobility, have changed the ways in which cities are experienced. Most recently, the pedestrian mode has been reprioritised, resulting in a shift of emphasis, particularly in European cities, toward recognising the destructive forces of automobility. This shift has been accompanied by a variety of pedestrian reprioritisation strategies, including the pedestrianisation of city streets and restricted vehicular access to particular inner-city zones at prescribed times. The challenge for many cities is how to legitimately change mindsets from automobility to walking. This paper explores the reprioritisation of urban walking not as ‘infrastructure’ or an ‘intervention’ but as transitory, ‘floating life’ across space and time. We conceptualise walking as a multi-sensorial, affective, and mobile engagement with the material environment. In doing so, we ask how the ‘floating life’ of pedestrianism may be reflected upon as part of the so-called ‘mobilities turn’, and in particular how theories of materiality, embodiment, design and experience interlink with walking. In this paper, walking as a pedestrian is therefore a particular quality of mobility. The way in which we ‘inhabit’ the city is significant when we walk, and turning to walking as ‘floating life’ pays attention to this underemphasised ontological dimension.

    A user-specific Machine Learning approach for improving touch accuracy on mobile devices

    We present a flexible Machine Learning approach for learning user-specific touch input models to increase touch accuracy on mobile devices. The model is based on flexible, non-parametric Gaussian Process regression and is learned using recorded touch inputs. We demonstrate that significant touch accuracy improvements can be obtained when either raw sensor data is used as an input or when the device’s reported touch location is used as an input, with the latter marginally outperforming the former. We show that learned offset functions are highly nonlinear and user-specific and that user-specific models outperform models trained on data pooled from several users. Crucially, significant performance improvements can be obtained with a small (≈ 200) number of training examples, easily obtained for a particular user through a calibration game or from keyboard entry data.
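    The core idea described above can be sketched as non-parametric regression from a reported touch location to a user-specific offset. The following is a minimal illustration using scikit-learn's Gaussian Process regressor on synthetic data; the paper's actual features, kernel, and training procedure are not specified in the abstract, so every data-generating function and kernel choice here is a hypothetical stand-in.

    ```python
    # Sketch: learn a user-specific touch-offset function with GP regression.
    # All data is synthetic; the true offset function below is an assumption
    # chosen only to be nonlinear, as the abstract describes.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)

    # ~200 calibration touches (cf. the abstract's ≈ 200 training examples):
    # reported touch locations (x, y) in mm on a hypothetical 60 mm screen.
    reported = rng.uniform(0, 60, size=(200, 2))

    # Hypothetical nonlinear, user-specific offset between reported location
    # and intended target, plus sensor noise.
    true_offset = np.column_stack([
        2.0 * np.sin(reported[:, 0] / 20.0),
        -1.5 * np.cos(reported[:, 1] / 15.0),
    ]) + rng.normal(0, 0.3, size=(200, 2))

    # Non-parametric GP regression from reported location to 2-D offset.
    kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=0.1)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(reported, true_offset)

    # Correcting a new touch: add the predicted offset to the reported point.
    new_touch = np.array([[30.0, 30.0]])
    corrected = new_touch + gp.predict(new_touch)
    print(corrected.shape)
    ```

    In practice the training pairs would come from a calibration game or keyboard entry, where the intended target is known and the offset is simply target minus reported location.
    
    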