LIPIcs, Volume 251, ITCS 2023, Complete Volume
The use of proxies in designing for and with autistic children: supporting friendship as a case study
Participatory Design (PD) is an approach to designing new technologies that involves end users in the design process. It is generally accepted that involving users in the design process gives them a sense of ownership over the final product, which enhances its usability and acceptance by the target population. Employing a PD approach can introduce multiple challenges, especially when working with autistic children. Many approaches for involving autistic children and children with special needs have been developed to address these challenges. However, these frameworks introduce their own limitations as well. There is an ethical dilemma to consider in the involvement of autistic children in the design process. Although we established the ethical benefit of involving children, we did not address the ethical issues that result from involving them in these research projects. Among other issues, the nature of the design workshops we as a community currently run requires working and communicating with unfamiliar researchers, while social and communication differences are one of the main diagnostic criteria for autism. When designing for autistic children and other vulnerable populations, an alternative (or, more often, an additional) approach is designing with proxies. Proxies for the child can be drawn from several groups of other stakeholders, such as teachers, parents, and siblings. Each of these groups may inform the design process, from their particular perspective, as proxies for the target group of autistic children. Decisions need to be made about which stages in the design process are suited to their participation, and the role they play in each case. For this reason, we explore the roles of teachers, parents, autistic adults, and neurotypical children as proxies in the design process.
To explore the roles of proxies, we chose friendship between autistic and neurotypical children as the context we are designing for. We are interested in understanding the nature of children's friendships and the potential for technology to support them. Although children themselves are the ones who experience friendship and the challenges around its development and peer interaction, they might find it difficult to articulate the challenges they face. Furthermore, it is unrealistic to expect children to identify strategies to overcome the friendship-development challenges they face, as doing so assumes they already have the social skills to devise such strategies in the first place. Hence, it is necessary in this context to consider proxies who can identify challenges and suggest ways to overcome them
Federated learning framework and energy disaggregation techniques for residential energy management
Residential energy use is a significant part of total power usage in developed countries. To reduce overall
energy use and save funds, these countries need solutions that help them keep track of how different
appliances are used at residences. Non-Intrusive Load Monitoring (NILM) or energy disaggregation
is a method for calculating individual appliance power consumption from a single meter tracking the
aggregated power of several appliances. To implement any NILM approach in the real world, it is
necessary to collect massive amounts of data from individual residences and transfer them to centralized
servers, where they will undergo extensive analysis. The centralized fashion of this procedure makes it
time-consuming and costly since transferring the data from thousands of residences to the central server
takes a lot of time and storage. This thesis proposes utilizing a Federated Learning (FL) framework for
NILM in order to make the entire system cost-effective and efficient. Rather than collecting data from
all clients (residences) and sending it back to the central server, local models are generated on each
client's end and trained on local data in FL. This allows FL to respond more quickly to changes in the
environment and handle data locally in a single household, increasing the system's speed. On top of
that, without any data transfer, FL prevents data leakage and preserves the clients' privacy, leading
to a safe and trustworthy system. For the first time, in this work, the performance of deploying FL
in NILM was investigated with two different energy disaggregation models: Short Sequence-to-Point
(Seq2Point) and Variational Auto-Encoder (VAE). Short Seq2Point, which uses fewer samples as the input
window for each appliance, aims to simulate real-time energy disaggregation for the different appliances.
Despite being a lightweight model, Short Seq2Point lacks generalizability and may face challenges
when disaggregating multi-state appliances
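The federated setup described above can be sketched in a few lines. This is a minimal, hypothetical illustration of federated averaging (FedAvg) with a linear model standing in for the Seq2Point/VAE disaggregators; the household data, model, and hyperparameters are invented for the example, not taken from the thesis.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient descent on its own data only."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def federated_averaging(clients, w0, rounds=10):
    """FedAvg: the server broadcasts global weights, each client trains
    locally, and the server averages the returned weights, weighted by
    each client's sample count. Raw data never leaves a residence."""
    w = w0.copy()
    n_total = sum(len(y) for _, y in clients)
    for _ in range(rounds):
        updates = [local_update(w, X, y) * (len(y) / n_total)
                   for X, y in clients]
        w = np.sum(updates, axis=0)
    return w

# Hypothetical data for three residences: meter features -> appliance power.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=50)
    clients.append((X, y))

w = federated_averaging(clients, w0=np.zeros(2))
print(np.round(w, 2))  # converges close to [2.0, -1.0]
```

Only the model weights cross the network in each round, which is what keeps the household data local and private.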
Improving the Academic Success of Technical College Students with Disabilities: A Multisite Descriptive Case Study
Students with disabilities in higher education have lower retention and graduation rates than students without disabilities. While postsecondary administrators are attempting to meet the needs of students by implementing necessary reforms, barriers remain, such as issues with disclosure, transition planning, and faculty knowledge. The present qualitative descriptive case study sought to explore the instructional practices implemented by technical college educators to accommodate students with learning challenges, including students with disabilities. The Universal Design for Learning framework was used to determine which current technical college faculty instructional accommodation practices intersect with or diverge from Universal Design for Learning principles. The participants were a purposeful sample of 12 full-time technical college faculty members from six technical colleges in a southern state, each with at least five years of teaching experience at the postsecondary level and experience working with at least one student with a disability. Data were collected in three phases through the Universal Design for Learning Checklist, semi-structured interviews, and document analysis of course syllabi. Frequency counts and thematic analysis were utilized to analyze the data. This qualitative research has implications for identifying consistent and best instructional practices that positively impact the academic achievement of college students with disabilities. The findings indicated that technical college faculty have been implementing Universal Design for Learning instructional strategies, both intentionally and unknowingly, in an attempt to provide equitable access to all students regardless of ability, and that technical college students can benefit from the incorporation of Universal Design for Learning principles into college courses.
The findings also implied that professional development training can become a vital aspect of instructors' improvement programs to enlighten them about strategies that are available to improve their work with students with disabilities
Towards compact bandwidth and efficient privacy-preserving computation
In traditional cryptographic applications, cryptographic mechanisms are employed to ensure the security and integrity of communication or storage. In these scenarios, the primary threat is usually an external adversary trying to intercept or tamper with the communication between two parties. On the other hand, in the context of privacy-preserving computation or secure computation, the cryptographic techniques are developed with a different goal in mind: to protect the privacy of the participants involved in a computation from each other. Specifically, privacy-preserving computation allows multiple parties to jointly compute a function without revealing their inputs and it has numerous applications in various fields, including finance, healthcare, and data analysis. It allows for collaboration and data sharing without compromising the privacy of sensitive data, which is becoming increasingly important in today's digital age. While privacy-preserving computation has gained significant attention in recent times due to its strong security and numerous potential applications, its efficiency remains its Achilles' heel. Privacy-preserving protocols require significantly higher computational overhead and bandwidth when compared to baseline (i.e., insecure) protocols. Therefore, finding ways to minimize the overhead, whether it be in terms of computation or communication, asymptotically or concretely, while maintaining security in a reasonable manner remains an exciting problem to work on. This thesis is centred around enhancing efficiency and reducing the costs of communication and computation for commonly used privacy-preserving primitives, including private set intersection, oblivious transfer, and stealth signatures. 
Our primary focus is on optimizing the performance of these primitives.
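To make the private set intersection (PSI) primitive concrete, here is a toy sketch of the classical semi-honest Diffie-Hellman-style PSI; this is a textbook construction for illustration, not the optimized protocols developed in the thesis, and the item names are invented. It relies on the fact that masking with two secret exponents commutes: (H(x)^a)^b = (H(x)^b)^a mod P.

```python
import hashlib
import secrets

# RFC 3526 1536-bit MODP prime; a real deployment would pick group size by
# security level. Primality only matters for security, not for the math below.
P = 0xFFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D670C354E4ABC9804F1746C08CA237327FFFFFFFFFFFFFFFF
Q = (P - 1) // 2

def h2g(item: str) -> int:
    """Hash an item into the quadratic-residue subgroup of Z_P*."""
    d = int.from_bytes(hashlib.sha256(item.encode()).digest(), "big")
    return pow(d % P, 2, P)

def psi(set_a, set_b):
    """Semi-honest DH-based PSI: double-masked values H(x)^(ab) collide
    exactly when the underlying items match, revealing only the overlap."""
    a = secrets.randbelow(Q - 1) + 1   # Alice's secret exponent
    b = secrets.randbelow(Q - 1) + 1   # Bob's secret exponent
    # Alice sends H(x)^a; Bob raises them to b and sends his own H(y)^b.
    masked_a = {pow(h2g(x), a, P): x for x in set_a}
    double_a = {pow(v, b, P): x for v, x in masked_a.items()}
    double_b = {pow(pow(h2g(y), b, P), a, P) for y in set_b}
    return {x for v, x in double_a.items() if v in double_b}

print(psi({"alice@x.org", "bob@x.org"}, {"bob@x.org", "carol@x.org"}))
# {'bob@x.org'}
```

In a real protocol Bob would shuffle the double-masked values before returning them; the communication cost of exactly this kind of exchange is what bandwidth-compact PSI constructions aim to shrink.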
Exploring perceptions of interreligious learning and teaching and the interplay with religious identity
Toni Foley explored perceptions of interreligious learning and teaching starting with self, extending to other adults and then to participants in a Catholic School. The study revealed the importance of leadership, all voices counting, and an educational frame for learning. Resonances could assist all schools to work towards solidarity and a 'civilisation of love'
From Zero to Hero: Detecting Leaked Data through Synthetic Data Injection and Model Querying
Safeguarding the Intellectual Property (IP) of data has become critically
important as machine learning applications continue to proliferate, and their
success heavily relies on the quality of training data. While various
mechanisms exist to secure data during storage, transmission, and consumption,
fewer studies address detecting whether data have already been leaked for
model training without authorization. This issue is particularly challenging
due to the absence of information and control over the training process
conducted by potential attackers.
In this paper, we concentrate on the domain of tabular data and introduce a
novel methodology, Local Distribution Shifting Synthesis (\textsc{LDSS}), to
detect leaked data that are used to train classification models. The core
concept behind \textsc{LDSS} involves injecting a small volume of synthetic
data--characterized by local shifts in class distribution--into the owner's
dataset. This enables the effective identification of models trained on leaked
data through model querying alone, as the synthetic data injection results in a
pronounced disparity in the predictions of models trained on leaked and
modified datasets. \textsc{LDSS} is \emph{model-oblivious} and hence compatible
with a diverse range of classification models, such as Naive Bayes, Decision
Tree, and Random Forest. We have conducted extensive experiments on seven types
of classification models across five real-world datasets. The comprehensive
results affirm the reliability, robustness, fidelity, security, and efficiency
of \textsc{LDSS}.
Comment: 13 pages, 11 figures, and 4 tables
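The core idea — inject synthetic points that locally shift the class distribution, then detect leakage by querying a suspect model — can be illustrated with a deliberately simplified sketch. This is not the paper's actual LDSS construction; the data, region, and models are invented for the example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)

# Owner's dataset: two well-separated Gaussian classes in 2-D.
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

# Marking step (simplified): synthesize points inside class 0's territory
# but label them class 1, shifting the local class distribution. The
# released (and potentially leaked) dataset includes these marked points.
X_syn = rng.normal([-1.0, -1.0], 0.2, (40, 2))
y_syn = np.ones(40, dtype=int)
X_marked = np.vstack([X, X_syn])
y_marked = np.concatenate([y, y_syn])

# Suspect A trained on the leaked, marked data; suspect B on clean data.
leaked_model = DecisionTreeClassifier(random_state=0).fit(X_marked, y_marked)
clean_model = DecisionTreeClassifier(random_state=0).fit(X, y)

def leak_score(model, X_syn, y_syn):
    """Query-only detection: agreement with the injected labels."""
    return (model.predict(X_syn) == y_syn).mean()

print(leak_score(leaked_model, X_syn, y_syn))  # near 1.0: learned the shift
print(leak_score(clean_model, X_syn, y_syn))   # near 0.0: predicts class 0
```

The owner never needs the suspect model's weights — only black-box predictions at the synthetic points — which is what makes the approach model-oblivious.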
Security and Authenticity of AI-generated code
The intersection of security and plagiarism in the context of AI-generated code is a critical theme through-
out this study. While our research primarily focuses on evaluating the security aspects of AI-generated code,
it is imperative to recognize the interconnectedness of security and plagiarism concerns. On the one hand,
we conduct an extensive analysis of the security flaws that might be present in AI-generated code, with a focus
on code produced by ChatGPT and Bard. This analysis emphasizes the dangers that might occur if such
code is incorporated into software programs, especially if it has security weaknesses. This directly affects
developers, who should exercise caution when considering AI-generated code in order to protect the
security of their applications. On the other hand, our research also covers code plagiarism. In the context
of AI-generated code, plagiarism, which is defined as the reuse of code without proper attribution or in
violation of license and copyright restrictions, becomes a significant concern. As open-source software and
AI language models proliferate, the risk of plagiarism in AI-generated code increases. Our research combines
code attribution techniques to identify the authors of AI-generated insecure code and determine where the code
originated. Our research emphasizes the multidimensional nature of AI-generated code and its wide-ranging
repercussions by addressing both security and plagiarism issues at the same time. This comprehensive approach
contributes to a deeper understanding of the problems and ethical implications associated with the use of
AI in code generation, embracing both security and authorship-related concerns
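The abstract does not specify which attribution techniques the study combines; a common lexical baseline, shown here as a hypothetical sketch, compares token streams after discarding comments, whitespace, and identifier names, so that trivially renamed copies still match.

```python
import difflib
import io
import keyword
import tokenize

SKIP = {tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE,
        tokenize.INDENT, tokenize.DEDENT}

def code_tokens(src: str):
    """Lexical fingerprint: drop comments/layout, normalize identifiers."""
    toks = []
    for tok in tokenize.generate_tokens(io.StringIO(src).readline):
        if tok.type in SKIP:
            continue
        if tok.type == tokenize.NAME and not keyword.iskeyword(tok.string):
            toks.append("ID")  # renaming-resistant: all identifiers collapse
        else:
            toks.append(tok.string)
    return toks

def similarity(a: str, b: str) -> float:
    """Similarity ratio between two snippets' token streams (0.0 to 1.0)."""
    return difflib.SequenceMatcher(None, code_tokens(a), code_tokens(b)).ratio()

original = "def add(a, b):\n    return a + b\n"
suspect = "def add(x, y):\n    # sum two values\n    return x + y\n"
unrelated = "import os\nprint(os.getcwd())\n"

print(similarity(original, suspect))    # 1.0: identical after normalization
print(similarity(original, unrelated))  # low
```

Real attribution systems go well beyond this (structural fingerprints, stylometric features, learned models), but even a token-level ratio shows how reuse can survive superficial edits such as renaming and re-commenting.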