Designing a Patient-Centered Clinical Workflow to Assess Cyberbully Experiences of Youths in the U.S. Healthcare System
Cyberbullying, or online harassment, is often defined as repeatedly and intentionally harassing, mistreating, or mocking others using electronic devices, with the aim of scaring, angering, or shaming them [296]. Youths experiencing cyberbullying report higher levels of anxiety, depression, mental distress, suicidal thoughts, and substance abuse than their non-bullied peers [360, 605, 261, 354]. Even though bullying is associated with significant health problems, to date very few youth anti-bullying efforts have been initiated and directed in clinical settings. There is presently no standardized procedure or workflow across health systems for systematically assessing cyberbullying or other equally dangerous online activities among vulnerable groups such as children and adolescents [599]. Therefore, I developed a series of research projects to link digital indicators of cyberbullying or online harassment to clinical practice by advocating design considerations for a patient-centered clinical assessment and workflow that addresses patients' needs and expectations to ensure quality care. Through this dissertation, I aim to answer the following high-level research questions:
RQ1. How does the presence of severe online harassment on online platforms contribute to negative experiences and risky behaviors within vulnerable populations?
RQ2. How efficient is the current mechanism for screening these risky online negative experiences and behaviors, specifically related to cyberbullying, within at-risk populations such as adolescents in clinical settings?
RQ3. How might evidence of activities and negative harassing experiences on online platforms best be integrated into electronic health records during clinical treatment?
I first explore how harassment presents on different social media platforms across diverse contexts and cultural norms (studies 1, 2, and 3); next, by analyzing actual patient data, I address current limitations in the screening process in clinical settings that fails to efficiently address core aspects of cyberbullying and its consequences among adolescent patients (studies 4 and 5); finally, connecting all my findings, I recommend specific design guidelines for a refined screening tool and structured processes for implementing and integrating the screened data into patients' electronic health records (EHRs) for better patient assessment and treatment outcomes around cyberbullying among adolescent patients (study 6).
Slums on Screen
From Jacob Riis's How The Other Half Lives (1890) to Danny Boyle's Slumdog Millionaire (2008), Igor Krstić outlines a transnational history of films that either document or fictionalise the favelas, shantytowns, barrios populares, or chawls of our "planet of slums".
Raphtory: Modelling, Maintenance and Analysis of Distributed Temporal Graphs.
PhD Theses
Temporal graphs capture the development of relationships within data throughout time. This
model fits naturally within a streaming architecture, where new events can be inserted directly
into the graph upon arrival from a data source and be compared to related entities or historical
state. However, the majority of graph processing systems only consider traditional graph analysis
on static data, whilst those which do expand past this often only support batched updating and
delta analysis across graph snapshots. In this work we define a temporal property graph model
and the semantics for updating it in both a distributed and non-distributed context. We have
built Raphtory, a distributed temporal graph analytics platform which maintains the full graph
history in memory, leveraging the defined update semantics to insert streamed events directly into
the model without batching or centralised ordering. In parallel with the ingestion, traditional
and time-aware analytics may be performed on the most up-to-date version of the graph, as
well as any point throughout its history. The depth of history viewed from the perspective of
a time point may also be varied to explore both short and long term patterns within the data.
Through this we extract novel insights over a variety of use cases, including phenomena never
seen before in social networks. Finally, we demonstrate Raphtory's ability to scale both vertically
and horizontally, handling consistent throughput in excess of 100,000 updates a second alongside
the ingestion and maintenance of graphs built from billions of events.
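The core idea of a temporal property graph that retains its full history and can be viewed at any time point (or over any window) can be illustrated with a minimal sketch. This is an illustrative toy, not Raphtory's actual distributed implementation; the class and method names are assumptions:

```python
from collections import defaultdict

class TemporalGraph:
    """Toy temporal graph: every edge event is kept with its timestamp,
    so the graph can be reconstructed at any point in its history."""

    def __init__(self):
        self.events = []  # (time, src, dst) edge-addition events

    def add_edge(self, time, src, dst):
        # Events are appended as they arrive; no batching or global ordering.
        self.events.append((time, src, dst))

    def view(self, at, window=None):
        """Adjacency as of time `at`, optionally restricted to events in the
        last `window` time units (short- vs long-term patterns)."""
        start = at - window if window is not None else float("-inf")
        adj = defaultdict(set)
        for t, s, d in self.events:
            if start <= t <= at:
                adj[s].add(d)
        return adj
```

For example, `view(3)` sees only events up to time 3, while `view(5, window=2)` sees only events from times 3 through 5; a real system would index events rather than scan them, but the queryable-history semantics are the same.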
KADABRA is an ADaptive Algorithm for Betweenness via Random Approximation
We present KADABRA, a new algorithm to approximate betweenness centrality in directed and undirected graphs, which significantly outperforms all previous approaches on real-world complex networks. The efficiency of the new algorithm relies on two new theoretical contributions, of independent interest. The first contribution focuses on sampling shortest paths, a subroutine used by most algorithms that approximate betweenness centrality. We show that, on realistic random graph models, we can perform this task in time |E|^(1/2 + o(1)) with high probability, obtaining a significant speedup with respect to the Θ(|E|) worst-case performance. We experimentally show that this new technique achieves similar speedups on real-world complex networks as well. The second contribution is a new rigorous application of the adaptive sampling technique. This approach decreases the total number of shortest paths that need to be sampled to compute all betweenness centralities with a given absolute error, and it also handles more general problems, such as computing the k most central nodes. Furthermore, our analysis is general, and it might be extended to other settings.
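The path-sampling subroutine the abstract refers to can be sketched as follows: sample random (s, t) pairs, find one shortest s-t path each, and credit the path's internal vertices. This is a heavily simplified illustration, not KADABRA itself; it omits the balanced bidirectional BFS, the adaptive stopping rule, and uniform sampling among all shortest paths (taking the BFS-tree path, as here, biases the estimate):

```python
import random
from collections import deque

def sample_betweenness(adj, n_samples, seed=0):
    """Crude betweenness estimate: sample random (s, t) pairs, find one
    shortest s-t path by BFS, and credit the path's internal vertices."""
    rng = random.Random(seed)
    nodes = list(adj)
    est = {v: 0.0 for v in nodes}
    for _ in range(n_samples):
        s, t = rng.sample(nodes, 2)
        # BFS from s, remembering one predecessor per reached vertex.
        pred = {s: None}
        q = deque([s])
        while q and t not in pred:
            u = q.popleft()
            for w in adj[u]:
                if w not in pred:
                    pred[w] = u
                    q.append(w)
        if t not in pred:
            continue  # t unreachable from s
        v = pred[t]
        while v is not None and v != s:  # credit internal vertices only
            est[v] += 1.0 / n_samples
            v = pred[v]
    return est
```

Each sample costs one (truncated) BFS; KADABRA's contribution is precisely to make this per-sample cost sublinear in |E| on realistic graphs and to stop sampling adaptively once the desired error bound is met.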
A METHOD TO IMPROVE THE TIME OF COMPUTING BETWEENNESS CENTRALITY IN SOCIAL NETWORK GRAPH
Betweenness centrality is an important metric in graph theory and can be applied in analyzing social networks. Most research on betweenness centrality focuses on reducing its computational complexity. Nowadays, the number of users in social networks is huge, so improving the computing time of betweenness centrality for social network applications is necessary. In this paper, we propose an algorithm that computes betweenness centrality by reducing similar nodes in the graph in order to reduce computing time. Our experiments on network graphs show that the computing time of the proposed algorithm is less than that of the Brandes algorithm. The proposed algorithm is compared with the Brandes algorithm [3] in terms of execution time.
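The Brandes algorithm used as the baseline here computes exact betweenness in O(|V||E|) time for unweighted graphs: one BFS per source, followed by a dependency accumulation in reverse BFS order. A minimal sketch for an unweighted graph given as an adjacency dict (not the paper's proposed node-reduction variant):

```python
from collections import deque

def brandes_betweenness(adj):
    """Exact betweenness centrality (Brandes): one BFS per source that
    counts shortest-path multiplicities, then reverse-order accumulation."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        sigma = {v: 0 for v in adj}    # number of shortest s-v paths
        dist = {v: -1 for v in adj}
        preds = {v: [] for v in adj}   # predecessors on shortest paths
        sigma[s], dist[s] = 1, 0
        order = []
        q = deque([s])
        while q:
            u = q.popleft()
            order.append(u)
            for w in adj[u]:
                if dist[w] < 0:
                    dist[w] = dist[u] + 1
                    q.append(w)
                if dist[w] == dist[u] + 1:
                    sigma[w] += sigma[u]
                    preds[w].append(u)
        # Accumulate pair dependencies in reverse BFS order.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for u in preds[w]:
                delta[u] += sigma[u] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

For undirected graphs each unordered pair is counted from both endpoints, so the scores are conventionally halved. Reducing structurally similar nodes before running this loop shrinks |V| and |E| directly, which is where the paper's speedup would come from.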
Eloquence and Its Conditions
Political rhetoric generally assumes an asymmetric relationship between speaker and audience, but the rhetorical tradition has also developed resources to render this relationship more equitable. One such resource is the conception of the rhetorical situation as one of mutual vulnerability to risk on the part of both speaker and audience. However, this conception is increasingly threatened by "algorithmic" practices of political rhetoric that shield elite speakers from exposure to risk, as well as by the overcorrecting reaction to this development seen in the demagogic rhetoric of "unfiltered" and spontaneous "straight talk." Turning to the classical tradition of eloquence can help us recover an alternative to both of these troubling tendencies, which we might call "spontaneous decorum." This notion of eloquence combines qualities associated with spontaneity, because it welcomes risk and uncertainty as part of public deliberation, with qualities associated with decorum, because it is conceived as set apart from ordinary speech, embracing verbal artifice and rejecting the value of sincerity.
Part 1 of the dissertation considers the development of this model of eloquence in classical Greek and Roman rhetoric. Chapter 1 uses the oratory of Demosthenes, and its reception in antiquity, to critique the notion of sincerity as a warrant of rhetorical truthfulness. Chapter 2 addresses the resistance to the systematization of rhetoric in Cicero and Quintilian. Part 2 of the dissertation considers the continuing relevance of ancient notions of eloquence, investigating ways in which more recent writers have worked to translate them into modern institutional settings. Chapter 3 focuses on Edmund Burke's role in the 18th-century reception of classical eloquence; it reconsiders his provocative claim that disruptive speech can act as a spur to sound political judgment, even under rule-bound, constitutional government. Chapter 4 explores the means by which Thomas Babington Macaulay attempted to revive the ancient conviction that history is a branch of rhetoric, arguing that the oratorical coloring of his work can best be understood as a response to the contemporary emergence of mass politics; it also contrasts his historical method with the resolutely anti-rhetorical method of Alexis de Tocqueville. Finally, Chapter 5 considers how Carl Schmitt constructed the contemporary "crisis of parliamentary democracy" as a rhetorical crisis, and how his proposed solution to the crisis (taking seriously the ritual as well as the strictly deliberative aspects of rhetoric) informed the illiberal turn in his thought; I conclude by arguing that a more nuanced conception of ritual action can better account for the value of stylized speech, is consistent with the classical tradition, and is more potentially compatible with democratic deliberation.
While the first part of the dissertation reconstructs a model of eloquence open to both spontaneity and stylization, the second part shows that this model is far from a relic, and that it remains a valuable resource for critiquing the current state of political speech.
Proposition-based summarization with a coherence-driven incremental model
Summarization models which operate on meaning representations of documents have been neglected in the past, although they are a very promising and interesting class of methods for summarization and text understanding. In this thesis, I present one such summarizer, which uses the proposition as its meaning representation.
My summarizer is an implementation of Kintsch and van Dijk's model of comprehension, which uses a tree of propositions to represent the working memory. The input document is processed incrementally in iterations. In each iteration, new propositions are connected to the tree under the principle of local coherence, and then a forgetting mechanism is applied so that only a few important propositions are retained in the tree for the next iteration. A summary can be generated using the propositions which are frequently retained.
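The incremental attach-then-forget loop described above can be sketched in a toy form, with propositions reduced to sets of argument tokens and argument overlap standing in crudely for local coherence. All names, the buffer size, and the overlap heuristic are illustrative assumptions, not the thesis's actual model:

```python
def summarize(propositions, buffer_size=2, summary_size=1):
    """Toy Kintsch/van Dijk-style loop: each iteration ranks the buffered
    propositions by argument overlap with the incoming one (coherence),
    forgets all but the top few, and counts how often each proposition
    is retained; frequently retained propositions form the summary."""
    retained = {i: 0 for i in range(len(propositions))}
    buffer = []
    for i, args in enumerate(propositions):
        # Local coherence: keep the buffered propositions that share the
        # most arguments with the incoming proposition.
        buffer.sort(key=lambda j: len(propositions[j] & args), reverse=True)
        # Forgetting: truncate the buffer, then admit the new proposition.
        buffer = buffer[:buffer_size - 1] + [i]
        for j in buffer:
            retained[j] += 1
    ranked = sorted(retained, key=lambda j: retained[j], reverse=True)
    return ranked[:summary_size]
```

On the input `[{"dog", "run"}, {"dog", "bark"}, {"cat", "sleep"}]`, the first proposition survives every iteration through its shared "dog" argument and is selected for the summary; the real model works over a proposition tree with richer attachment and retention rules rather than a flat buffer.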
Originally, this model was only worked through by hand by its inventors, using human-created propositions. In this work, I turned it into a fully automatic model using current NLP technologies. First, I create propositions by obtaining and then transforming a syntactic parse. Second, I have devised algorithms to numerically evaluate alternative ways of adding a new proposition, as well as to predict necessary changes in the tree. Third, I compared different methods of modelling local coherence, including coreference resolution, distributional similarity, and lexical chains.
In the first group of experiments, my summarizer realizes summary propositions by sentence extraction. These experiments show that my summarizer outperforms several state-of-the-art summarizers. The second group of experiments concerns abstractive generation from propositions, which is a collaborative project. I have investigated the option of compressing extracted sentences, but generation from propositions has been shown to provide better information packaging.