
    Primate social cognition: uniquely primate, uniquely social, or just unique?

    Primates undoubtedly have impressive abilities in perceiving, recognising, understanding and interpreting other individuals, their ranks and relationships; they learn rapidly in social situations, employ both deceptive and cooperative tactics to manipulate companions, and distinguish others’ knowledge from ignorance. Some evidence suggests that great apes recognize the cognitive basis of manipulative tactics and have a deeper appreciation of intention and cooperation than monkeys; and only great apes among primates show any understanding of the concept of self. None of these abilities is unique to primates, however. We distinguish (1) a package of quantitative advantages in social sophistication, evident in several broad mammalian taxa, in which neocortical enlargement is associated with social group size; from (2) a qualitative difference in understanding found in several distantly related but large-brained species, including great apes, some corvids, and perhaps elephants, dolphins, and domestic dogs. Convergence of similar abilities in widely divergent taxa should enable their cognitive basis and evolutionary origins to be determined. Cortical enlargement seems to have been evolutionarily selected by social challenges, although it confers intellectual benefits in other domains as well; most likely the mechanism is more efficient memory. The taxonomic distribution of qualitatively special social skills does not point to an evolutionary origin in social challenges, and may be more closely linked to a need to acquire novel ways of dealing with the physical world; but at present research on this question remains in its infancy. In the case of great apes, their ability to learn new manual routines by parsing action components may also account for their qualitatively different social skills, suggesting that any strict partition of physical and social cognition is likely to be misleading.

    Structure out of sound

    Thesis (Ph.D.), Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1993. Author: Michael Jerome Hawley. Includes vita and bibliographical references (p. 155-170).

    Optimizing Human Performance in Mobile Text Entry

    Although text entry on mobile phones is ubiquitous, research still strives to achieve desktop typing performance "on the go". But how can researchers evaluate new and existing mobile text entry techniques? How can they ensure that evaluations are conducted in a consistent manner that facilitates comparison? What forms of input are possible on a mobile device? Do the audio and haptic feedback options of most touchscreen keyboards affect performance? What influences users' preference for one feedback or another? Can rearranging the characters and keys of a keyboard improve performance? This dissertation answers these questions and more. The TEMA software developed here allows researchers to evaluate mobile text entry methods in an easy, detailed, and consistent manner, and many in academia and industry have adopted it. TEMA was used to evaluate a typical QWERTY keyboard with multiple options for audio and haptic feedback. Though feedback did not have a significant effect on performance, a survey revealed that users' choice of feedback is influenced by social and technical factors. Another study using TEMA showed that novice users entered text faster using a tapping technique than with a gesture or handwriting technique. This motivated rearranging the keys and characters to create a new keyboard, MIME, that would provide better performance for expert users. Data on character frequency and key selection times were gathered and used to design MIME. A longitudinal user study using TEMA revealed an entry speed of 17 wpm and a total error rate of 1.7% for MIME, compared to 23 wpm and 5.2% for QWERTY. Although MIME's entry speed did not surpass QWERTY's during the study, it is projected to do so after twelve hours of practice. MIME's error rate was consistently low and significantly lower than QWERTY's. In addition, participants found MIME more comfortable to use, with some reporting hand soreness after using QWERTY for extended periods.
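    The wpm and error figures quoted above follow the conventional text-entry metrics used throughout this literature; a minimal sketch of those formulas (the definitions are the standard conventions, not taken from TEMA itself):

```python
# Standard text-entry metrics. By convention, one "word" is five
# characters, including spaces.

def wpm(transcribed: str, seconds: float) -> float:
    """Entry speed in words per minute for one transcribed phrase."""
    return (len(transcribed) / 5.0) / (seconds / 60.0)

def total_error_rate(incorrect_fixed: int, incorrect_unfixed: int,
                     correct: int) -> float:
    """Total error rate (%): erroneous keystrokes over all keystrokes,
    counting both corrected and uncorrected errors."""
    total = correct + incorrect_fixed + incorrect_unfixed
    return 100.0 * (incorrect_fixed + incorrect_unfixed) / total

# 25 characters transcribed in 15 seconds -> 20 wpm.
speed = wpm("the quick brown fox jumps", 15.0)
```

    By the five-characters-per-word convention, 25 characters transcribed in 15 seconds corresponds to 20 wpm; the total error rate counts corrected as well as uncorrected keystrokes, which is why it penalizes heavy backspacing.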

    Enhanced context-aware framework for individual and crowd condition prediction

    A context-aware framework utilizes contexts such as a user's individual activities, location, and time, which are hidden information derived from smartphone sensors. These data are used to monitor a situation in a crowd scenario. Applied through embedded sensors, it has the potential to monitor situations that are otherwise impractical to access. Inaccuracies in individual activity recognition (IAR), caused by faulty accelerometer data and data classification problems, have made IAR inefficient when used for prediction. This study addressed the problem by introducing a method of feature extraction and selection that provides higher accuracy by selecting only the relevant features and minimizing the false negative rate (FNR) of IAR used for crowd condition prediction. The approach used was the enhanced context-aware framework (EHCAF) for the prediction of human movement activities during an emergency. Three new methods to ensure high accuracy and low FNR were introduced. Firstly, an improved statistical-based time-frequency domain (SBTFD) method, representing and extracting hidden context information from sensor signals with improved accuracy, was introduced. Secondly, a feature selection method (FSM) was used to achieve improved accuracy and a low false negative rate with SBTFD. Finally, a method for individual behaviour estimation (IBE) and crowd condition prediction, incorporating threshold-based crowd density determination (CDD), was developed and achieved a low false negative rate. In this approach, individual behaviour estimation used the best selected features, flow velocity estimation, and direction to determine the disparity value of individual abnormal behaviour in a crowd. These were used for individual and crowd density determination evaluation in terms of inflow, outflow, and crowd turbulence during an emergency.
Classifiers were used to confirm the features' ability to differentiate individual activity recognition data classes. Experiments with SBTFD and a decision tree (J48) classifier produced a maximum of 99.2% accuracy and a 3.3% false negative rate. The individual classes were classified based on the 7 best features, which reduced dimensionality, increased accuracy to 99.1%, and yielded a low false negative rate (FNR) of 2.8%. In conclusion, the enhanced context-aware framework developed in this research proved to be a viable solution for individual and crowd condition prediction in our society.
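    The abstract does not spell out the SBTFD feature set; the following is a toy sketch of the kind of statistical time- and frequency-domain features such a method might extract from one accelerometer window (the feature names and the naive DFT are illustrative assumptions, not the thesis's actual design):

```python
import math

def time_frequency_features(window):
    """Toy statistical time/frequency-domain features for one window of
    accelerometer-magnitude samples. Illustrative only; the actual SBTFD
    feature set from the thesis is not reproduced here."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    std = math.sqrt(var)
    # Frequency domain: spectral energy via a naive O(n^2) DFT.
    # By Parseval's theorem this equals the sum of squared samples.
    energy = 0.0
    for k in range(n):
        re = sum(window[t] * math.cos(2 * math.pi * k * t / n)
                 for t in range(n))
        im = -sum(window[t] * math.sin(2 * math.pi * k * t / n)
                  for t in range(n))
        energy += (re * re + im * im) / n
    return {"mean": mean, "std": std, "spectral_energy": energy}

feats = time_frequency_features([0.9, 1.1, 1.0, 1.2, 0.8, 1.0, 1.1, 0.9])
```

    A feature-selection step would then keep only the most discriminative of such features (7 in the study above) before training a classifier such as J48.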

    Effective Identity Management on Mobile Devices Using Multi-Sensor Measurements

    Due to the dramatic increase in popularity of mobile devices in the past decade, sensitive user information is stored and accessed on these devices every day. Securing sensitive data stored on and accessed from mobile devices makes user-identity management a problem of paramount importance. The tension between security and usability renders the task of user-identity verification on mobile devices challenging. Meanwhile, an appropriate identity management approach is missing, since most existing technologies for user-identity verification are either one-shot user verification or only work in restricted, controlled environments. To solve these problems, we investigated approaches based on the sensor data generated by human-mobile interactions. The data are collected from the on-board sensors, including voice data from the microphone, acceleration data from the accelerometer, angular velocity data from the gyroscope, magnetic force data from the magnetometer, and multi-touch gesture input data from the touchscreen. We studied the feasibility of extracting biometric and behaviour features from the on-board sensor data and how to efficiently employ the extracted features to perform user-identity verification on the smartphone. Based on the experimental results of the single-sensor modalities, we further investigated how to integrate them with hardware such as fingerprint readers and TrustZone to practically fulfill a usable identity management system for both local application and remote service control. User studies and on-device testing sessions were held for privacy and usability evaluation.
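    The abstract leaves the integration of single-sensor modalities unspecified; one common approach is weighted score-level fusion, sketched below (the modality names, weights, and threshold are illustrative assumptions, not the thesis's actual design):

```python
def fuse_scores(scores, weights):
    """Weighted score-level fusion of per-modality match scores,
    each assumed to lie in [0, 1]."""
    total_w = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_w

def verify(scores, weights, threshold=0.6):
    """Accept the claimed identity if the fused score clears a
    hypothetical decision threshold."""
    return fuse_scores(scores, weights) >= threshold

# Hypothetical per-modality scores from one verification attempt.
scores = {"voice": 0.8, "gait": 0.7, "touch_gesture": 0.9}
weights = {"voice": 0.5, "gait": 0.2, "touch_gesture": 0.3}
decision = verify(scores, weights)
```

    Score-level fusion keeps each modality's matcher independent, so a noisy or unavailable sensor degrades the fused score gracefully rather than breaking verification outright.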

    Image processing techniques for mixed reality and biometry

    This thesis work is focused on two applicative fields of image processing research which, for different reasons, have become particularly active in the last decade: Mixed Reality and Biometry. Though the image processing techniques involved in these two research areas are often different, they share the key objective of recognizing salient features typically captured through imaging devices. Enabling technologies for augmented/mixed reality have been improved and refined throughout the last years, and more recently they seem to have finally passed the demo stage and become ready for practical industrial and commercial applications. In this regard, a crucial role will likely be played by the new generation of smartphones and tablets, equipped with an arsenal of sensors, connections, and enough processing power to become the most portable and affordable AR platform ever. Within this context, techniques like gesture recognition, by means of simple, light and robust capturing hardware and advanced computer vision techniques, may play an important role in providing a natural and robust way to control software applications and to enhance on-the-field operational capabilities. The research described in this thesis is targeted toward advanced visualization and interaction strategies aimed to improve the operative range and robustness of mixed reality applications, particularly for demanding industrial environments.

    Exploring the Use of Wearables to Enable Indoor Navigation for Blind Users

    One of the challenges that people with visual impairments (VI) have to confront daily is navigating independently through foreign or unfamiliar spaces. Navigating through unfamiliar spaces without assistance is very time-consuming and leads to lower mobility. Especially in indoor environments, where the use of GPS is impossible, this task becomes even harder. However, advancements in mobile and wearable computing pave the path to new, inexpensive assistive technologies that can make the lives of people with VI easier. Wearable devices have great potential for assistive applications for users who are blind, as they typically feature a camera and support hands-free and eyes-free interaction. Smart watches and heads-up displays (HUDs), in combination with smartphones, can provide a basis for the development of advanced algorithms capable of providing inexpensive solutions for navigation in indoor spaces. New interfaces are also introduced, making the interaction between users who are blind and mobile devices more intuitive. This work presents a set of new systems and technologies created to help users with VI navigate indoor environments. The first system presented is an indoor navigation system for people with VI that operates by using sensors found in mobile devices and virtual maps of the environment. The second system helps users navigate large open spaces with minimum veering. Next, a study is conducted to determine the accuracy of pedometry based on different body placements of the accelerometer sensors. Finally, a gesture detection system is introduced that helps communication between the user and mobile devices by using sensors in wearable devices.
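    The pedometry study above rests on detecting steps in accelerometer data; a toy threshold-and-peak step counter (the threshold, refractory window, and test signal are illustrative assumptions, not the dissertation's actual algorithm) might look like:

```python
def count_steps(magnitudes, threshold=1.2, refractory=3):
    """Toy pedometer: count local peaks in accelerometer-magnitude
    samples (in g) that exceed `threshold`, ignoring candidates within
    `refractory` samples of the previously detected step to suppress
    double-counting from signal jitter."""
    steps = 0
    last = -refractory
    for i in range(1, len(magnitudes) - 1):
        if (magnitudes[i] > threshold
                and magnitudes[i] >= magnitudes[i - 1]
                and magnitudes[i] >= magnitudes[i + 1]
                and i - last >= refractory):
            steps += 1
            last = i
    return steps

# Hypothetical magnitude trace with three step peaks.
signal = [1.0, 1.3, 1.0, 0.9, 1.4, 1.0, 1.0, 1.35, 1.0]
n = count_steps(signal)
```

    Sensor placement changes the peak shape and amplitude of this magnitude trace, which is why the study compares pedometry accuracy across body placements of the accelerometer.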

    ESCOM 2017 Book of Abstracts

    Recent Developments in Smart Healthcare

    Medicine is undergoing a sector-wide transformation thanks to the advances in computing and networking technologies. Healthcare is changing from reactive and hospital-centered to preventive and personalized, from disease-focused to well-being-centered. In essence, healthcare systems, as well as fundamental medicine research, are becoming smarter. We anticipate significant improvements in areas ranging from molecular genomics and proteomics, to decision support for healthcare professionals through big data analytics, to support for behavior change through technology-enabled self-management and social and motivational support. Furthermore, with smart technologies, healthcare delivery could also be made more efficient, of higher quality, and less costly. In this special issue, we received a total of 45 submissions and accepted 19 outstanding papers that roughly span several interesting topics in smart healthcare, including public health, health information technology (Health IT), and smart medicine.