
    Economic risk evaluation in urban flooding and instability-prone areas: The case study of San Giovanni Rotondo (Southern Italy)

    Estimating the economic losses caused to buildings and other civil engineering works by flooding events is often a difficult task. The accuracy of the estimate depends on the availability of detailed data on the return period of the flooding event, the vulnerability of the exposed assets, and the type of economic activity in the affected area. This paper provides a quantitative methodology for assessing the economic losses associated with flood scenarios. The proposed methodology was applied to an urban area in Southern Italy prone to hydrogeological instabilities. First, the main physical characteristics of the area, such as rainfall, land use, permeability, roughness, and slopes, were estimated to obtain input for the flooding simulations. The analysis then focused on the influence of the spatial variability of the rainfall parameters on flood events. The hydraulic modeling provided different flood hazard scenarios. The risk curve obtained by plotting the economic consequences against the return period of each hazard scenario can be a useful tool for local authorities to identify adequate risk mitigation measures and thereby prioritize the economic resources necessary for their implementation.
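    The risk curve described in this abstract (economic consequences plotted against return period) can be sketched in a few lines. The scenario figures and the trapezoidal integration below are illustrative assumptions, not the paper's data or method; the integration of loss over annual exceedance probability (1/T) is a standard way to summarize such a curve as an expected annual loss.

    ```python
    # Hypothetical hazard scenarios: (return period in years, estimated loss in EUR).
    # These values are invented for illustration only.
    scenarios = [
        (20, 1.0e6),
        (50, 2.5e6),
        (100, 4.0e6),
        (300, 6.0e6),
    ]

    # Convert each return period T to an annual exceedance probability 1/T.
    points = sorted((1.0 / t, loss) for t, loss in scenarios)

    # Trapezoidal integration of loss over exceedance probability
    # yields the expected annual loss (EAL).
    eal = 0.0
    for (p0, l0), (p1, l1) in zip(points, points[1:]):
        eal += 0.5 * (l0 + l1) * (p1 - p0)

    print(f"Expected annual loss: {eal:,.0f} EUR")
    ```

    Plotting the same (probability, loss) pairs gives the risk curve itself; the area under it is the EAL, which is one way a local authority could rank mitigation options by how much of that area they remove.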

    A multi-modal approach to creating routines for smart speakers

    Smart speakers can execute user-defined routines, namely, sequences of actions triggered by specific events or conditions. This paper presents a new approach to the creation of routines, which leverages the multi-modal features (vision, speech, and touch) offered by Amazon Alexa running on Echo Show devices. It then illustrates how end users found it easier to create routines with the proposed approach than with the usual interaction through the Alexa app.

    Creating Routines for IoT Ecosystems through Conversation with Smart Speakers

    Nowadays, end users can create routines for Amazon Echo and Google Nest devices using a companion app (Amazon Alexa and Google Home, respectively) running on smartphones. Our work explores the possibility of transferring this End-User Development activity directly to the smart speakers, with and without a touchscreen. To this end, we designed and developed two Amazon Skills (one for the Amazon Echo Show and the other for the Amazon Echo Dot) and two Google Actions (one for the Google Nest Hub and the other for the Google Home Speaker). We then carried out two controlled experiments, involving 40 participants, to compare routine creation through multi-modal interaction (based on vision, speech, and touch) with routine creation through speech-only interaction. Driven by our research questions, we found that for routine creation the multi-modal interaction is preferred to the speech-only one, and that the perceived quality of the interaction seems to depend on the brand of the smart speaker.
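    The routine model these abstracts refer to, a trigger paired with an ordered sequence of actions, can be sketched as a small data structure. The names and actions below are hypothetical illustrations, not the Alexa or Google Home APIs.

    ```python
    # Minimal sketch of a user-defined routine: a trigger phrase plus an
    # ordered list of actions executed when the trigger fires.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Routine:
        trigger: str                                   # spoken phrase that starts the routine
        actions: List[Callable[[], str]] = field(default_factory=list)

        def fire(self) -> List[str]:
            # Execute the actions in order, collecting their results.
            return [action() for action in self.actions]

    # Hypothetical "good morning" routine assembled by an end user.
    morning = Routine(
        trigger="good morning",
        actions=[
            lambda: "lights: on",
            lambda: "thermostat: 21C",
            lambda: "news: playing briefing",
        ],
    )

    def handle_utterance(utterance: str, routines: List[Routine]) -> List[str]:
        # Dispatch an utterance to the first routine whose trigger it contains.
        for routine in routines:
            if routine.trigger in utterance.lower():
                return routine.fire()
        return []

    print(handle_utterance("Alexa, good morning", [morning]))
    ```

    The End-User Development question the papers study is how this structure gets built: through a smartphone companion app, through speech alone, or through a multi-modal (vision, speech, touch) dialogue on the device itself.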