Using Mobile Data and Deep Models to Assess Auditory Verbal Hallucinations
Hallucination is an apparent perception in the absence of a real external sensory stimulus. An auditory hallucination is the perception of hearing sounds that are not real. A common form of auditory hallucination is hearing voices in the absence of any speaker, known as Auditory Verbal Hallucination (AVH). AVHs are fragments of the mind's creation that occur mostly in people diagnosed with mental illnesses such as bipolar disorder and schizophrenia.
Assessing the valence of hallucinated voices (i.e., how negative or positive the voices are) can help measure the severity of a mental illness. We study N=435 individuals who experience hearing voices to assess auditory verbal hallucination. Participants report the valence of the voices they hear four times a day for a month through ecological momentary assessments, answering on a four-point scale from "not at all" to "extremely". We collect these self-reports as the valence supervision of AVH events via a mobile application. Using the application, participants also record audio diaries to verbally describe the content of the hallucinated voices. In addition, we passively collect mobile sensing data as contextual signals. We then examine how predictive these linguistic and contextual cues from the audio diaries and mobile sensing data are of an auditory verbal hallucination event. Finally, using transfer learning and data fusion techniques, we train a neural network model that predicts the valence of AVH with 54% top-1 and 72% top-2 F1 scores.
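To make the transfer-learning and fusion setup concrete, here is a minimal sketch of one way such a model could look: embeddings from a frozen pretrained text encoder represent the audio-diary transcripts, a small network encodes the mobile-sensing features, and the two are fused before a four-way valence classifier. All names, dimensions, and the metric helper are illustrative assumptions, not the paper's actual architecture (and the helper computes top-k accuracy as a stand-in for the reported top-k F1).

```python
# Hypothetical sketch of the fusion model described above; sizes and names
# are assumptions, not the authors' reported architecture.
import torch
import torch.nn as nn

class AVHValenceFusion(nn.Module):
    def __init__(self, text_dim=768, sensing_dim=32, hidden=128, n_classes=4):
        super().__init__()
        # Projection of frozen pretrained text embeddings (transfer learning).
        self.text_proj = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        # Encoder for passively collected mobile-sensing features.
        self.sensing_proj = nn.Sequential(nn.Linear(sensing_dim, hidden), nn.ReLU())
        # Late fusion: concatenate the two modality embeddings, then classify
        # into the four EMA valence levels ("not at all" ... "extremely").
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, text_emb, sensing_feats):
        fused = torch.cat([self.text_proj(text_emb),
                           self.sensing_proj(sensing_feats)], dim=-1)
        return self.classifier(fused)

def top_k_accuracy(logits, labels, k=2):
    # Top-k "hit": the true label counts as correct if it is among the
    # k highest-scoring classes (the same idea behind the top-2 metric).
    topk = logits.topk(k, dim=-1).indices
    return (topk == labels.unsqueeze(-1)).any(dim=-1).float().mean().item()
```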
Improving the Efficacy of Context-Aware Applications
In this dissertation, we explore methods for enhancing the context-awareness capabilities of modern computers, including mobile devices, tablets, wearables, and traditional computers. Our contributions include methods for fusing information from multiple logical sensors, localizing nearby objects using depth sensors, and building models to better understand the content of 2D images.
First, we propose a system called Unagi, designed to incorporate multiple logical sensors into a single framework that allows context-aware application developers to easily test new ideas and create novel experiences. Unagi is responsible for collecting data, extracting features, and building personalized models for each individual user. We demonstrate the utility of the system with two applications: adaptive notification filtering and a network content prefetcher. We also thoroughly evaluate the system with respect to predictive accuracy, temporal delay, and power consumption.
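As a rough illustration of the kind of interface such a framework exposes, the sketch below shows a plugin-style logical sensor and a per-user model built on fused features. The class and method names are hypothetical; this is not Unagi's actual API, which the abstract does not describe.

```python
# Hypothetical sketch of a Unagi-style logical-sensor pipeline; names are
# illustrative only.
from abc import ABC, abstractmethod
from sklearn.linear_model import LogisticRegression

class LogicalSensor(ABC):
    """One logical sensor: wraps raw data collection and feature extraction."""
    @abstractmethod
    def collect(self):
        """Return the latest raw readings."""

    @abstractmethod
    def features(self, raw):
        """Turn raw readings into a list of feature values."""

class PersonalizedModel:
    """Per-user model trained on fused features from all registered sensors."""
    def __init__(self, sensors):
        self.sensors = sensors
        self.model = LogisticRegression()

    def feature_vector(self):
        # Fuse features from every logical sensor into one flat vector.
        vec = []
        for s in self.sensors:
            vec.extend(s.features(s.collect()))
        return vec

    def fit(self, X, y):
        # e.g., labels could be notification accept/dismiss decisions.
        self.model.fit(X, y)

    def predict(self, x):
        return self.model.predict([x])[0]
```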
Next, we discuss a set of techniques that can be used to accurately determine the location of objects near a user in 3D space using a mobile device equipped with both depth and inertial sensors. Using a novel chaining approach, we are able to locate objects farther away than the standard range of the depth sensor without compromising localization accuracy. Empirical testing shows our method can localize objects 30 m from the user with an error of less than 10 cm.
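One way to picture the chaining idea is as pose composition: each hop's depth measurement stays within sensor range, while inertially tracked device poses between hops compose into a world-frame estimate of a distant object. The sketch below is an assumed simplification (ignoring drift correction and calibration), not the dissertation's exact algorithm.

```python
# Hypothetical sketch of chained localization via homogeneous transforms.
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def chain_localize(device_poses, depth_point):
    """Compose per-hop device poses, then apply the final depth measurement
    (the object's position in the last device frame) to get a world-frame point."""
    T = np.eye(4)
    for T_step in device_poses:      # pose of each hop, relative to the previous one
        T = T @ T_step
    p = np.append(depth_point, 1.0)  # homogeneous coordinates
    return (T @ p)[:3]

# Example: two 1.5 m hops straight ahead, then a depth hit 2 m in front
# of the final pose -> object is 5 m ahead of the starting point.
step = pose(np.eye(3), np.array([0.0, 0.0, 1.5]))
print(chain_localize([step, step], np.array([0.0, 0.0, 2.0])))  # [0. 0. 5.]
```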
Finally, we demonstrate a set of techniques that allow a multi-layer perceptron (MLP) to learn resolution-invariant representations of 2D images, including an MCMC-based technique for improving the selection of pixels for the mini-batches used during training. We also show that a deep convolutional encoder can be trained to output a resolution-independent representation in constant time, and we discuss several potential applications of this research, including image resampling, image compression, and security.
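One plausible reading of MCMC-based pixel selection is a Metropolis random walk over pixel coordinates whose stationary distribution is proportional to a per-pixel importance map (e.g., current reconstruction error), so mini-batches concentrate on pixels the model gets wrong. The sketch below implements that reading; the proposal and target choices are assumptions, not the dissertation's exact scheme.

```python
# Hypothetical Metropolis random walk over pixel coordinates; mini-batch
# pixels are drawn roughly in proportion to the importance map.
import numpy as np

def mcmc_pixel_batch(importance, batch_size, step=5, rng=None):
    rng = rng or np.random.default_rng()
    h, w = importance.shape
    x, y = rng.integers(h), rng.integers(w)     # random start pixel
    batch = []
    while len(batch) < batch_size:
        # Propose a nearby pixel (symmetric random-walk proposal).
        nx = int(np.clip(x + rng.integers(-step, step + 1), 0, h - 1))
        ny = int(np.clip(y + rng.integers(-step, step + 1), 0, w - 1))
        # Metropolis acceptance: drift toward higher-importance pixels.
        if rng.random() < min(1.0, importance[nx, ny] / max(importance[x, y], 1e-12)):
            x, y = nx, ny
        batch.append((x, y))                    # keep every state of the chain
    return batch

# Example: bias sampling toward the centre of a 64x64 importance map.
imp = np.fromfunction(lambda i, j: np.exp(-((i - 32)**2 + (j - 32)**2) / 200.0), (64, 64))
pixels = mcmc_pixel_batch(imp, batch_size=256)
```

Keeping every state of the chain (including rejected-move repeats) is what makes the sample frequencies track the importance map rather than the uniform proposal.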