Explanation and trust: what to tell the user in security and AI?
There is a common problem in artificial intelligence (AI) and information security. In AI, an expert system needs to be able to justify and explain a decision to the user. In information security, experts need to be able to explain to the public why a system is secure. In both cases, the goal of explanation is to acquire or maintain the users' trust. In this paper, we investigate the relation between explanation and trust in the context of computing science. The analysis draws on a literature study and concept analysis, using elements from systems theory as well as actor-network theory. We apply the conceptual framework to both AI and information security, and show its benefit for both fields by means of examples. The main focus is on expert systems (AI) and electronic voting systems (security). Finally, we discuss the consequences of our analysis for ethics in terms of (un)informed consent and dissent, and the associated division of responsibilities.
Natural language processing
Beginning with the basic issues of NLP, this chapter aims to chart the major research activities in this area since the last ARIST chapter in 1996 (Haas, 1996), including: (i) natural language text processing systems, such as text summarization, information extraction, and information retrieval, including domain-specific applications; (ii) natural language interfaces; (iii) NLP in the context of the WWW and digital libraries; and (iv) evaluation of NLP systems.
Guidelines for the Provision of Garbage Reception Facilities at Ports Under MARPOL Annex V
This report offers guidelines for the provision of adequate port reception facilities for vessel-generated garbage under the requirements of Annex V of the International Convention for the Prevention of Pollution From Ships, 1973 (MARPOL 73/78), Regulations for the Prevention of Pollution by Garbage from Ships. MARPOL Annex V prohibits at-sea disposal of plastic materials from vessels, and specifies the distance from shore at which other materials may be dumped. Annex V also requires the provision of port reception facilities for garbage, but it does not specify these facilities or how they are to be provided. Since the at-sea dumping restrictions apply to all vessels, the reception facility requirement applies to all ports, terminals, and marinas that serve vessels. These guidelines were prepared to assist port owners and operators in meeting their obligation to provide adequate reception facilities for garbage. The report synthesizes available information and draws upon experience from the first years of implementation of MARPOL Annex V. (PDF file contains 55 pages.)
Special Libraries, December 1962
Volume 53, Issue 10
Special Libraries, October 1963
Volume 54, Issue 8
Police Knowledge Exchange: Full Report 2018
[Executive Summary]
This report was commissioned to explore the enablers of and barriers to sharing within and between police forces, and between police forces and partners, including the public. It draws on an interdisciplinary review of international literature covering sharing, knowledge exchange, learning and organisational learning. The literature broke down into four main factors: who, why, what and how. The introduction to the literature considers 'who' is sharing, covering both personal identity and different institutional issues. The 'why' literature covers cultural and community motivators and barriers. The 'what' segment reviews concepts of data, information and knowledge, and related legislative issues. Finally, the 'how' section spans face-to-face sharing approaches and technologies that produce both enablers and barriers. A series of 42 in-depth interviews and focus groups was completed and combined with 47 survey responses. The aim of the interviews, focus groups and survey was to capture perceptions and beliefs around knowledge sharing from a small sample across policing, in order to complement the findings from the literature review.
The survey was adapted from a standardised questionnaire (Biggs, 1987), which focused on what motivated students to learn and how they approached their learning. Our adapted survey looked at what motivated police to share, and how they approached sharing. The responses showed a trend, across the police, towards a motivation for sharing in order to develop a deeper understanding of issues. However, the approaches and strategies they used to share with others were primarily driven by achieving and surface approaches (to get promoted and to get the job done). According to Biggs (1987), this could leave them discontented, as they never progress to a deeper understanding of issues. Scaffolding sharing within the police through processes that are clearly defined, effective and valued could help to overcome these issues.
The interview and focus group findings adopted a similarly structured approach to sharing. Within the 'who' section, key aspects around personal relationships, reciprocity and reputation were identified. 'Why' the police share was one of the largest discussion points: not only was there a deep motivation to solve key policing issues, there was also an ethos of reciprocity. Police sharing was deeply motivated by supporting 'good practice' in the prevention and detection of crime. However, a barrier to sharing was identified in the parity of value given to different types of knowledge, for example between professional judgement and research evidence. Sharing was achieved when there were reciprocal benefits, in particular within personal networks or face-to-face sharing, which was noted as 'safe'. Sharing was inhibited by misunderstandings around its 'risks', frequently attributed to data protection legislation, producing cautious reactions and serving as an avoidance tactic to save the time and effort of sharing. A further divide was noted between technical users and those who avoided any online systems for sharing, often due to poorly designed systems and a lack of confidence in how to use them. The police culture was identified as risk-averse and competitive due to multiple factors: a lack of supported time to share, Her Majesty's Inspectorate of Constabulary (HMIC) reviews, and promotion criteria. The perceived result was a poor cultural ability to learn from mistakes and a likelihood of repeating errors.
A set of strategic recommendations is given, including the use of an authorised professional practice on sharing for HMIC reviews, sharing networks, and training. A further set of operational recommendations is given, such as sharing impact cases for evidence-based practice, data sharing officers, and evaluating mechanisms for sharing.
This full report is supported by the Police Knowledge Exchange Summary Report 2018, which gives an overview of the findings and recommendations.
Impact Evaluations and Development: Nonie Guidance on Impact Evaluation
In international development, impact evaluation is principally concerned with the final results of interventions (programs, projects, policy measures, reforms) on the welfare of communities, households, and individuals, including taxpayers and voters. Impact evaluation is one tool within the larger toolkit of monitoring and evaluation (including broad program evaluations, process evaluations, ex ante studies, etc.). The Network of Networks for Impact Evaluation (NONIE) was established in 2006 to foster more and better impact evaluations by its membership -- the evaluation networks of bilateral and multilateral organizations focusing on development issues, as well as networks of developing country evaluators. NONIE's member networks conduct a broad set of evaluations, examining issues such as project and strategy performance, institutional development, and aid effectiveness. By sharing methodological approaches and promoting learning by doing on impact evaluations, NONIE aims to promote the use of this more specific approach by its members within their larger portfolio of evaluations. This document, by Frans Leeuw and Jos Vaessen, has been developed to support this focus. For development practitioners, impact evaluations play a key role in the drive for better evidence on results and development effectiveness. They are particularly well suited to answering important questions about whether development interventions do or do not work, whether they make a difference, and how cost-effective they are. Consequently, they can help ensure that scarce resources are allocated where they can have the most developmental impact.
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in
building systems that learn and think like people. Many advances have come from
using deep neural networks trained end-to-end in tasks such as object
recognition, video games, and board games, achieving performance that equals or
even beats humans in some respects. Despite their biological inspiration and
performance achievements, these systems differ from human intelligence in
crucial ways. We review progress in cognitive science suggesting that truly
human-like learning and thinking machines will have to reach beyond current
engineering trends in both what they learn, and how they learn it.
Specifically, we argue that these machines should (a) build causal models of
the world that support explanation and understanding, rather than merely
solving pattern recognition problems; (b) ground learning in intuitive theories
of physics and psychology, to support and enrich the knowledge that is learned;
and (c) harness compositionality and learning-to-learn to rapidly acquire and
generalize knowledge to new tasks and situations. We suggest concrete
challenges and promising routes towards these goals that can combine the
strengths of recent neural network advances with more structured cognitive
models.

Comment: In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016).
https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar