Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For
Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals' lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a "right to an explanation" has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic "black box" to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core "algorithmic war stories" that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as "meaningful information about the logic of processing" may not be provided by the kind of ML "explanations" computer scientists have developed, partially in response. ML explanations are restricted both by the type of explanation sought, the dimensionality of the domain and the type of user seeking an explanation. However, "subject-centric explanations" (SCEs), focussing on particular regions of a model around a query, show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations) in dodging developers' worries of intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a "right to an explanation" in the GDPR may be at best distracting, and at worst nurture a new kind of "transparency fallacy." But all is not lost.
We argue that other parts of the GDPR related (i) to the right to erasure (the "right to be forgotten") and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments, and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human-centered.
RiPLE: Recommendation in Peer-Learning Environments Based on Knowledge Gaps and Interests
Various forms of Peer-Learning Environments are increasingly being used in
post-secondary education, often to help build repositories of student generated
learning objects. However, large classes can result in an extensive repository,
which can make it more challenging for students to search for suitable objects
that both reflect their interests and address their knowledge gaps. Recommender
Systems for Technology Enhanced Learning (RecSysTEL) offer a potential solution
to this problem by providing sophisticated filtering techniques to help
students to find the resources that they need in a timely manner. Here, a new
RecSysTEL for Recommendation in Peer-Learning Environments (RiPLE) is
presented. The approach uses a collaborative filtering algorithm based upon
matrix factorization to create personalized recommendations for individual
students that address their interests and their current knowledge gaps. The
approach is validated using both synthetic and real data sets. The results are
promising, indicating RiPLE is able to provide sensible personalized
recommendations for both regular and cold-start users under reasonable
assumptions about parameters and user behavior.
Comment: 25 pages, 7 figures. The paper is accepted for publication in the Journal of Educational Data Mining.
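The matrix-factorization approach to collaborative filtering that the abstract describes can be sketched in a few lines. This is a generic illustration of the technique, not RiPLE's actual implementation; the toy score matrix, hyperparameters, and function names are all invented for the example.

```python
import numpy as np

def factorize(R, k=2, steps=500, lr=0.01, reg=0.02, seed=0):
    """Factor a score matrix R into latent-factor matrices U and V via SGD.
    Zero entries in R are treated as unobserved (to be predicted)."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = rng.normal(scale=0.1, size=(n_users, k))
    V = rng.normal(scale=0.1, size=(n_items, k))
    observed = [(u, i) for u in range(n_users)
                for i in range(n_items) if R[u, i] > 0]
    for _ in range(steps):
        for u, i in observed:
            err = R[u, i] - U[u] @ V[i]
            u_row = U[u].copy()  # keep the pre-update row for V's gradient
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * u_row - reg * V[i])
    return U, V

# Toy data: rows are students, columns are learning objects; entries are
# observed interaction scores, 0 = not yet attempted.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
U, V = factorize(R)
pred = U @ V.T  # predicted scores for unobserved cells drive recommendations
```

The predictions for the zero (unattempted) cells are what a recommender would rank; in RiPLE's setting, high predicted benefit on an unattempted object marks it as addressing a knowledge gap.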
IoT Sentinel: Automated Device-Type Identification for Security Enforcement in IoT
With the rapid growth of the Internet-of-Things (IoT), concerns about the
security of IoT devices have become prominent. Several vendors are producing
IP-connected devices for home and small office networks that often suffer from
flawed security designs and implementations. They also tend to lack mechanisms
for firmware updates or patches that can help eliminate security
vulnerabilities. Securing networks where the presence of such vulnerable
devices is given, requires a brownfield approach: applying necessary protection
measures within the network so that potentially vulnerable devices can coexist
without endangering the security of other devices in the same network. In this
paper, we present IOT SENTINEL, a system capable of automatically identifying
the types of devices being connected to an IoT network and enabling enforcement
of rules for constraining the communications of vulnerable devices so as to
minimize damage resulting from their compromise. We show that IOT SENTINEL is
effective in identifying device types and has minimal performance overhead.
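The core idea of fingerprinting a device's network behavior and classifying it into a device type can be illustrated with a deliberately simplified sketch. The three features and the nearest-centroid classifier below are stand-ins (the actual system extracts many packet-header features and uses a more capable classifier); every value here is made up.

```python
import numpy as np

# Hypothetical per-device fingerprints: each row summarizes a device's setup
# traffic as [mean packet size, fraction of DNS packets, distinct dest ports].
# Feature choices and values are illustrative, not IoT Sentinel's features.
train_X = np.array([
    [90.0,  0.40,  3],   # smart plug
    [95.0,  0.35,  4],   # smart plug
    [400.0, 0.05, 12],   # IP camera
    [380.0, 0.08, 11],   # IP camera
])
train_y = np.array(["plug", "plug", "camera", "camera"])

def nearest_centroid(X, y):
    """Compute one mean fingerprint (centroid) per device-type label."""
    labels = sorted(set(y))
    centroids = np.array([X[y == label].mean(axis=0) for label in labels])
    return labels, centroids

def classify(x, labels, centroids):
    """Assign a new device to the type with the closest centroid."""
    return labels[int(np.argmin(np.linalg.norm(centroids - x, axis=1)))]

labels, centroids = nearest_centroid(train_X, train_y)
device_type = classify(np.array([92.0, 0.38, 3]), labels, centroids)
```

Once a device is labeled, the network can apply type-specific rules, e.g. restricting a known-vulnerable camera model to its cloud endpoint only.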
Toward Network-based DDoS Detection in Software-defined Networks
To combat susceptibility of modern computing systems to cyberattack, identifying and disrupting malicious traffic without human intervention is essential. To accomplish this, three main tasks for an effective intrusion detection system have been identified: monitor network traffic, categorize and identify anomalous behavior in near real time, and take appropriate action against the identified threat. This system leverages distributed SDN architecture and the principles of Artificial Immune Systems and Self-Organizing Maps to build a network-based intrusion detection system capable of detecting and terminating DDoS attacks in progress.
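As a rough illustration of the Self-Organizing Map component, one can train a small SOM on feature vectors from normal traffic and flag traffic windows whose distance to the best-matching unit exceeds a threshold learned from the training data. This is a generic sketch of SOM-based anomaly detection, not the system described above; the features, grid size, and threshold rule are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=100, lr0=0.5, sigma0=1.5, seed=0):
    """Train a tiny Self-Organizing Map on 'normal' traffic feature vectors."""
    rng = np.random.default_rng(seed)
    n_nodes = grid[0] * grid[1]
    W = rng.uniform(size=(n_nodes, data.shape[1]))          # node weights
    coords = np.array([(i, j) for i in range(grid[0])
                       for j in range(grid[1])], dtype=float)  # grid positions
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)            # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5  # shrinking neighborhood
        for x in data:
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))  # best-matching unit
            h = np.exp(-np.linalg.norm(coords - coords[bmu], axis=1) ** 2
                       / (2 * sigma ** 2))
            W += lr * h[:, None] * (x - W)     # pull neighborhood toward x
    return W

def anomaly_score(W, x):
    return np.linalg.norm(W - x, axis=1).min()  # distance to best-matching unit

# Hypothetical per-window features: [pkts/s, mean size, SYN ratio], scaled to [0,1].
rng = np.random.default_rng(1)
normal = rng.normal([0.2, 0.5, 0.1], 0.05, size=(200, 3))
W = train_som(normal)
threshold = max(anomaly_score(W, x) for x in normal)
flood = np.array([0.95, 0.1, 0.9])  # DDoS-like burst: high rate, small pkts, many SYNs
is_anomalous = anomaly_score(W, flood) > threshold
```

In a full pipeline, a window flagged this way would trigger the "take appropriate action" step, e.g. an SDN controller installing a flow rule to drop the offending traffic.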
Supplement to MTI Study on Selective Passenger Screening in the Mass Transit Rail Environment, MTI Report 09-05
This supplement updates and adds to MTI's 2007 report on Selective Screening of Rail Passengers (Jenkins and Butterworth, MTI 07-06: Selective Screening of Rail Passengers). The report reviews current screening programs implemented (or planned) by nine transit agencies, identifying best practices. The authors also discuss why three other transit agencies decided not to implement passenger screening at this time. The supplement reconfirms earlier conclusions that selective screening is a viable security option, but that effective screening must be based on clear policies, carefully managed to avoid perceptions of racial or ethnic profiling, and supported by the public. The supplement also addresses new developments, such as vapor-wake detection canines, continuing challenges, and areas of debate. Those interested should also read MTI S-09-01, Rail Passenger Selective Screening Summit.
Widening the Lens on Boys and Men of Color
Current philanthropic initiatives on boys and men of color use research that often fails to disaggregate the "Asian" category, and disadvantaged AAPI and AMEMSA boys and men are often excluded from these funding initiatives. In response to AAPI and AMEMSA organizations' concerns about the lack of attention to boys and men in their communities, AAPIP undertook a community-based research effort as an initial step towards building knowledge within philanthropy about AAPI and AMEMSA boys and men of color.