Information actors beyond modernity and coloniality in times of climate change: A comparative design ethnography on the making of monitors for sustainable futures in Curaçao and Amsterdam, 2019-2022
In his dissertation, Mr. Goilo develops a theoretical framework for an Anthropology of Information. The study compares information in the context of modernity in Amsterdam and of coloniality in Curaçao through the process of making monitors, and develops five ways of understanding how information can act towards sustainable futures. It also argues that the two contexts, modernity and coloniality, have been in informational symbiosis for centuries, a symbiosis that produces negative informational side effects in the age of the Anthropocene. By exploring this modernity-coloniality symbiosis of information, the author shows how scholars, policymakers, and data analysts can address the historical and structural roots of contemporary global inequities in the production and distribution of information. Ultimately, the five theses propose conditions for the collective production of knowledge towards a more sustainable planet.
A Holistic Analysis of Internet of Things (IoT) Security: Principles, Practices, and New Perspectives
UMSL Bulletin 2023-2024
The 2023-2024 Bulletin and Course Catalog for the University of Missouri St. Louis. https://irl.umsl.edu/bulletin/1088/thumbnail.jp
Navigating the system vs. changing the system: a comparative analysis of the influence of asset-based and rights-based approaches on the well-being of socio-economic disadvantaged communities in Scotland
Asset-based and rights-based approaches have become leading strategies in Scottish community development. The asset-based approach seeks to help communities develop skills to provide self-help solutions. The rights-based approach seeks to help communities claim rights and make governments more accountable. These two approaches are based on contrasting conceptions of empowerment, employ opposing methods and lead to different outcomes. However, there is no empirical research that has comparatively assessed the two. This thesis represents the first in-depth exploration of the comparative effects of asset-based and rights-based approaches on the well-being of communities experiencing socio-economic disadvantage in Scotland.
The study follows a qualitative design that includes a comparative case study of two projects: the AB project (representing the asset-based approach), and the RB project (representing the rights-based approach). The study also includes the perspectives of a wider pool of practitioners working in a range of community development organisations in Scotland. In total, forty-five participants across seventeen organisations have participated in this study.
To assess the influence of asset-based and rights-based approaches upon well-being, this thesis employs a pluralistic account that combines objective and subjective indicators across three dimensions: material, social and personal. The specific well-being framework employed combines White's (2010) well-being framework for development practice with Oxfam Scotland's (2013) Humankind Index.
The results of this study indicate that asset-based and rights-based approaches have important contrasting effects on well-being. The asset-based approach seems to have a more positive effect on project participants and across a higher number of well-being indicators. The rights-based approach has more observable effects on material well-being and a higher impact on the wider community, but across fewer indicators.
My findings also suggest that employing these approaches in community development settings brings different advantages and disadvantages. The asset-based approach seems easier to apply, and its positive outcomes for those involved are easier to demonstrate. This approach, however, risks sustaining the status quo and, by doing so, misses the opportunity to achieve more transformational outcomes. The rights-based approach seems able to address structural disadvantages more effectively. Yet it is more difficult to apply and to prove a positive impact, and organisations, practitioners, and communities applying it face higher costs.
These findings have significant implications at the practice level. Asset-based and rights-based approaches are rarely combined in UK community development settings. As a result, practitioners are often left having to trade off helping improve the well-being of project participants against helping improve the well-being of the wider community. In theory, practitioners could avoid this trade-off by combining the approaches. In practice, this is not always possible: asset-based and rights-based approaches represent opposing theories of change, and legal and funding requirements prevent organisations from pursuing a combination of both. Given this, understanding the comparative impact of applying asset-based and rights-based approaches in community development is critical.
A systematic literature review on source code similarity measurement and clone detection: techniques, applications, and challenges
Measuring and evaluating source code similarity is a fundamental software
engineering activity that embraces a broad range of applications, including but
not limited to code recommendation, duplicate code, plagiarism, malware, and
smell detection. This paper proposes a systematic literature review and
meta-analysis on code similarity measurement and evaluation techniques to shed
light on the existing approaches and their characteristics in different
applications. We initially found over 10000 articles by querying four digital
libraries and ended up with 136 primary studies in the field. The studies were
classified according to their methodology, programming languages, datasets,
tools, and applications. A deep investigation reveals 80 software tools,
working with eight different techniques on five application domains. Nearly 49%
of the tools work on Java programs and 37% support C and C++, while there is no
support for many programming languages. A noteworthy point was the existence of
12 datasets related to source code similarity measurement and duplicate code,
of which only eight were publicly accessible. The lack of reliable datasets,
empirical evaluations, hybrid methods, and support for multi-paradigm languages
are the main challenges in the field. Emerging applications of code similarity
measurement concentrate on the development phase in addition to the maintenance
phase.
Comment: 49 pages, 10 figures, 6 tables
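Token-based comparison is one of the common technique families covered by such surveys. The sketch below is illustrative only (the crude tokenizer and the Jaccard index are one simple instance of the idea, not a tool from the review):

```python
import re

def tokenize(source: str) -> set[str]:
    """Crude lexer: split source into identifier, number, and operator tokens."""
    return set(re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", source))

def jaccard_similarity(a: str, b: str) -> float:
    """Jaccard index over token sets: |A ∩ B| / |A ∪ B|."""
    ta, tb = tokenize(a), tokenize(b)
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

# Two clones that differ only in identifier names still share most tokens.
snippet1 = "int add(int a, int b) { return a + b; }"
snippet2 = "int sum(int x, int y) { return x + y; }"
print(jaccard_similarity(snippet1, snippet2))  # → 0.6
```

Because token sets ignore ordering and renaming of a few identifiers only shrinks the intersection slightly, this style of measure tolerates simple (Type-1/Type-2) clones but misses deeper semantic similarity, which is one reason the surveyed tools combine several techniques.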
Eunomia: Enabling User-specified Fine-Grained Search in Symbolically Executing WebAssembly Binaries
Although existing techniques have proposed automated approaches to alleviate
the path explosion problem of symbolic execution, users still need to optimize
symbolic execution by applying various searching strategies carefully. As
existing approaches mainly support only coarse-grained global searching
strategies, they cannot efficiently traverse through complex code structures.
In this paper, we propose Eunomia, a symbolic execution technique that allows
users to specify local domain knowledge to enable fine-grained search. In
Eunomia, we design an expressive DSL, Aes, that lets users precisely pinpoint
local searching strategies to different parts of the target program. To further
optimize local searching strategies, we design an interval-based algorithm that
automatically isolates the context of variables for different local searching
strategies, avoiding conflicts between local searching strategies for the same
variable. We implement Eunomia as a symbolic execution platform targeting
WebAssembly, which enables us to analyze applications written in various
languages (such as C and Go) that compile to WebAssembly. To the best of
our knowledge, Eunomia is the first symbolic execution engine that supports the
full features of the WebAssembly runtime. We evaluate Eunomia with a dedicated
microbenchmark suite for symbolic execution and six real-world applications.
Our evaluation shows that Eunomia accelerates bug detection in real-world
applications by up to three orders of magnitude. According to the results of a
comprehensive user study, users can significantly improve the efficiency and
effectiveness of symbolic execution by writing a simple and intuitive Aes
script. Besides verifying six known real-world bugs, Eunomia also detected two
new zero-day bugs in a popular open-source project, Collections-C.
Comment: Accepted by ACM SIGSOFT International Symposium on Software Testing
and Analysis (ISSTA) 202
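The core idea of pinpointing different searching strategies to different parts of a program can be sketched as follows. This is a toy illustration only: the names and the pc-range mapping are hypothetical, and Eunomia's actual Aes DSL is far more expressive than this dictionary.

```python
from dataclasses import dataclass

@dataclass
class PathState:
    pc: int      # program counter where this path is paused
    depth: int   # number of branches taken so far

# Local strategies keyed by code region: lower score = explored first.
LOCAL_STRATEGIES = {
    range(0, 100): lambda s: s.depth,     # breadth-first in the prologue
    range(100, 200): lambda s: -s.depth,  # depth-first inside a hot loop
}

def score(state: PathState) -> int:
    """Rank a path by the strategy local to its code region."""
    for region, strategy in LOCAL_STRATEGIES.items():
        if state.pc in region:
            return strategy(state)
    return state.depth  # global fallback: breadth-first

def pick_next(worklist: list[PathState]) -> PathState:
    """Pop the best-ranked path (a real engine would keep a priority heap)."""
    return min(worklist, key=score)

states = [PathState(pc=42, depth=3), PathState(pc=150, depth=7), PathState(pc=150, depth=2)]
print(pick_next(states))  # the deepest path in the depth-first region wins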
Computational Models of Argument Structure and Argument Quality for Understanding Misinformation
With the continuing spread of misinformation and disinformation online, it is of increasing importance to develop combating mechanisms at scale in the form of automated systems that can find checkworthy information, detect fallacious argumentation of online content, retrieve relevant evidence from authoritative sources and analyze the veracity of claims given the retrieved evidence. The robustness and applicability of these systems depend on the availability of annotated resources to train machine learning models in a supervised fashion, as well as machine learning models that capture patterns beyond domain-specific lexical clues or genre-specific stylistic insights. In this thesis, we investigate the role of models for argument structure and argument quality in improving tasks relevant to fact-checking and furthering our understanding of misinformation and disinformation. We contribute to argumentation mining, misinformation detection, and fact-checking by releasing multiple annotated datasets, developing unified models across datasets and task formulations, and analyzing the vulnerabilities of such models in adversarial settings.
We start by studying the argument structure's role in two downstream tasks related to fact-checking. As it is essential to differentiate factual knowledge from opinionated text, we develop a model for detecting the type of news articles (factual or opinionated) using highly transferable argumentation-based features. We also show the potential of argumentation features to predict the checkworthiness of information in news articles and provide the first multi-layer annotated corpus for argumentation and fact-checking.
We then study qualitative aspects of arguments through models for fallacy recognition. To understand the reasoning behind checkworthiness and the relation of argumentative fallacies to fake content, we develop an annotation scheme of fallacies in fact-checked content and investigate avenues for automating the detection of such fallacies considering single- and multi-dataset training. Using instruction-based prompting, we introduce a unified model for recognizing twenty-eight fallacies across five fallacy datasets. We also use this model to explain the checkworthiness of statements in two domains.
Next, we show our models for end-to-end fact-checking of statements that include finding the relevant evidence document and sentence from a collection of documents and then predicting the veracity of the given statements using the retrieved evidence. We also analyze the robustness of end-to-end fact extraction and verification by generating adversarial statements and addressing areas for improvement for models under adversarial attacks. Finally, we show that evidence-based verification is essential for fine-grained claim verification by modeling the human-provided justifications with the gold veracity labels.
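The end-to-end pipeline described above (retrieve a relevant document, select an evidence sentence, predict veracity) can be sketched in miniature. This is a hedged illustration: the word-overlap retrieval and the negation-based verdict rule are stand-ins for the trained neural models the thesis actually uses, and the documents are made up.

```python
def word_overlap(a: str, b: str) -> int:
    """Count shared lowercase words between two strings."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def fact_check(claim: str, documents: dict[str, list[str]]) -> tuple[str, str, str]:
    # Stage 1: retrieve the document most similar to the claim.
    doc_id = max(documents, key=lambda d: word_overlap(claim, " ".join(documents[d])))
    # Stage 2: select the most relevant evidence sentence within it.
    evidence = max(documents[doc_id], key=lambda s: word_overlap(claim, s))
    # Stage 3: predict veracity from the evidence (toy rule: presence of negation).
    verdict = "REFUTES" if any(w in evidence.lower().split() for w in ("not", "never")) else "SUPPORTS"
    return doc_id, evidence, verdict

docs = {
    "d1": ["The Eiffel Tower is in Paris.", "It opened in 1889."],
    "d2": ["The Louvre is not a tower.", "It is a museum in Paris."],
}
print(fact_check("The Eiffel Tower is located in Paris", docs))
```

Even this toy version makes the thesis's point about evidence-based verification concrete: the verdict is computed from the retrieved sentence, not from the claim's surface form alone, so retrieval errors propagate directly into veracity errors.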
Detecting Excessive Data Exposures in Web Server Responses with Metamorphic Fuzzing
APIs often transmit far more data to client applications than they need, and
in the context of web applications, often do so over public channels. This
issue, termed Excessive Data Exposure (EDE), was OWASP's third most significant
API vulnerability of 2019. However, there are few automated tools -- either in
research or industry -- to effectively find and remediate such issues. This is
unsurprising as the problem lacks an explicit test oracle: the vulnerability
does not manifest through explicit abnormal behaviours (e.g., program crashes
or memory access violations).
In this work, we develop a metamorphic relation to tackle that challenge and
build the first fuzzing tool -- that we call EDEFuzz -- to systematically
detect EDEs. EDEFuzz can significantly reduce false negatives that occur during
manual inspection and ad-hoc text-matching techniques, the current most-used
approaches.
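The metamorphic relation can be illustrated in miniature: delete one response field at a time, re-render, and flag fields whose removal leaves the client's output unchanged. This is a hedged sketch under simplifying assumptions; the `render` stand-in and the field names are hypothetical, and EDEFuzz itself replays mutated responses against live web applications rather than a local function.

```python
import copy

def render(page_data: dict) -> str:
    """Stand-in for client-side rendering: this example app only shows the name."""
    return f"<h1>{page_data['user']['name']}</h1>"

def find_excessive_fields(response: dict) -> list[str]:
    """Metamorphic check: a field whose deletion does not change the rendered
    output was never needed by the client -- a potential excessive exposure."""
    baseline = render(response)
    leaks = []
    for key in list(response["user"]):
        mutated = copy.deepcopy(response)
        del mutated["user"][key]
        try:
            if render(mutated) == baseline:
                leaks.append(key)
        except KeyError:
            pass  # deleting this field breaks rendering: the client uses it
    return leaks

api_response = {"user": {"name": "Ada", "email": "ada@example.com", "ssn": "123-45-6789"}}
print(find_excessive_fields(api_response))  # → ['email', 'ssn']
```

The relation sidesteps the missing test oracle: instead of asking "is this response correct?", it asks the answerable question "does the page change when this field disappears?".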
We tested EDEFuzz against the sixty-nine applicable targets from the Alexa
Top-200 and found 33,365 potential leaks -- illustrating our tool's broad
applicability and scalability. In a more tightly controlled experiment of eight
popular websites in Australia, EDEFuzz achieved a high true positive rate of
98.65% with minimal configuration, illustrating our tool's accuracy and
efficiency.