Innovation and the productivity challenge in the public sector
Evidence-based policymaking needs to be counter-balanced with intelligence-based policymaking, the Executive Director of the HC Coombs Policy Forum told an audience of senior public servants today.
Dr Mark Matthews used an address to the inaugural Policy Reflections Forum at the Department of Communications to suggest that the public service consider the concept of intelligence-based policymaking as a means of crafting quicker policy responses when information is partial or incomplete. Intelligence-based policymaking involves tests of competing hypotheses and is used widely by the intelligence community to inform decision-making when a shortage of time means that the accumulation of robust evidence is a challenge.
Matthews stressed that governments frequently have to make fast decisions on issues with considerable uncertainty over cause and effect, so in some circumstances the steady accumulation of information associated with evidence-based policymaking needs to be complemented with a faster approach. He added that there are many public policy challenges that stand to benefit from the use of intelligence-based policymaking.
"Intelligence-based policymaking has been explicitly designed to handle decision-making under conditions of substantive uncertainty, ambiguity and risk: situations in which there may be no option to wait for more evidence before deciding what to do about a possible threat.
"I think there's a compelling argument [to use intelligence-based policymaking] because it may be a faster, cheaper and more 'fit for purpose' approach to formulating policy.
"A transition to intelligence-based policymaking may be the step change in public sector productivity that we are searching for, simply because it involves much lower levels of wasted person-hours… and lower risks of wasted spending on intervention designs, and on the monitoring and evaluation of that spending, where these do not align with the reality that governments are the uncertainty and risk managers of last resort," he said.
He added that another advantage of intelligence-based policymaking is that it is better placed to handle possible unhelpful reactions from the groups at which a piece of policy is aimed.
"If I release an evidence-based assessment of a policy challenge, such as social policy or business regulation, the behavior of the actors and entities whose behaviors constitute the policy challenge is likely to change in response to their improved understanding of what government plans to do in the future. There are many examples of this."
Matthews leads the HC Coombs Policy Forum at Crawford School. The Forum is a collaboration between the Australian Government and The Australian National University with a mission to support innovative and experimental work at the interface between the public service and academia. His speech builds on an earlier keynote address calling for policymakers and academics to move beyond evidence-based policymaking: https://crawford.anu.edu.au/news/1637/building-better-partnerships
Matthews' speech to the Department of Communications, Innovation and the productivity challenge in the public sector, is available for download on his website: http://marklmatthews.com/2014/03/05/talk-on-innovation-and-the-productiv..
Semantic Ambiguity and Perceived Ambiguity
I explore some of the issues that arise when trying to establish a connection
between the underspecification hypothesis pursued in the NLP literature and
work on ambiguity in semantics and in the psychological literature. A theory of
underspecification is developed `from first principles', i.e., starting
from a definition of what it means for a sentence to be semantically ambiguous
and from what we know about the way humans deal with ambiguity. An
underspecified language is specified as the translation language of a grammar
covering sentences that display three classes of semantic ambiguity: lexical
ambiguity, scopal ambiguity, and referential ambiguity. The expressions of this
language denote sets of senses. A formalization of defeasible reasoning with
underspecified representations is presented, based on Default Logic. Some
issues to be confronted by such a formalization are discussed.
Comment: LaTeX, 47 pages. Uses tree-dvips.sty, lingmacros.sty, fullname.st
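One idea from this abstract, that an underspecified expression denotes a set of fully specified senses, can be illustrated with a toy sketch. The vocabulary and sense names below are invented for the example; the paper's actual formalism works over the translation language of a grammar and uses Default Logic, neither of which this sketch attempts to model.

```python
from itertools import product

# Toy lexicon (hypothetical): each ambiguous word maps to its set of senses.
SENSES = {
    "bank": {"bank/financial", "bank/riverside"},
    "deposit": {"deposit/money", "deposit/sediment"},
}

def word_senses(word):
    # An unambiguous word denotes the singleton set containing itself.
    return SENSES.get(word, {word})

def readings(sentence):
    """The set of readings an underspecified representation stands for:
    one reading per choice of sense for each lexically ambiguous word."""
    return set(product(*(word_senses(w) for w in sentence.split())))
```

Two two-way ambiguous words yield 2 × 2 = 4 candidate readings; disambiguation then amounts to filtering this set down, which is the role the defeasible-reasoning machinery plays in the paper.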
Data-oriented parsing and the Penn Chinese treebank
We present an investigation into parsing the Penn Chinese Treebank using a Data-Oriented Parsing (DOP) approach. DOP
is an experience-based approach to natural language parsing. Most published research in the DOP framework uses PS-trees as its representation schema. Drawbacks of the DOP approach centre around issues of efficiency. We incorporate recent advances in DOP parsing techniques into a novel DOP parser which generates a compact representation of all subtrees that can be derived from any full parse tree.
We compare our work to previous work on parsing the Penn Chinese Treebank, and provide both a quantitative and qualitative evaluation. While our results in terms of Precision and Recall are slightly below those published in related research, our approach requires no manual encoding of head rules, nor is a development phase per se necessary.
We also note that certain constructions which were problematic in this previous work can be handled correctly by our DOP parser. Finally, we observe that the "DOP Hypothesis" is confirmed for parsing the Penn Chinese Treebank.
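As a rough illustration of what a DOP model works with, here is a naive enumerator of tree fragments: connected subtrees in which every node keeps either all of its children or none (becoming a frontier node). The tree encoding is invented for the sketch, and real DOP parsers avoid explicit enumeration, since the number of fragments grows exponentially with tree size; that blow-up is precisely what the compact representation mentioned in the abstract avoids.

```python
from itertools import product

# A parse tree node is (label, children), with children a tuple
# (empty for lexical leaves).

def fragments(node):
    """Enumerate the DOP fragments rooted at this node. At each node we
    either cut (the node becomes a frontier) or keep all children, each
    child independently expanded into one of its own fragments."""
    label, children = node
    if not children:
        return [(label, ())]
    result = [(label, ())]  # cut here: node becomes a frontier
    for combo in product(*(fragments(c) for c in children)):
        result.append((label, tuple(combo)))
    return result

def all_fragments(tree):
    """Fragments rooted at every node of the tree."""
    label, children = tree
    frags = fragments(tree)
    for c in children:
        frags.extend(all_fragments(c))
    return frags
```

Even the tiny tree (S (NP John) (VP (V runs))) yields 7 fragments rooted at S alone, which is why subtree sets are stored compactly rather than materialised.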
A Survey of Location Prediction on Twitter
Locations, e.g., countries, states, cities, and point-of-interests, are
central to news, emergency events, and people's daily lives. Automatic
identification of locations associated with or mentioned in documents has been
explored for decades. As one of the most popular online social network
platforms, Twitter has attracted a large number of users who send millions of
tweets on a daily basis. Due to the world-wide coverage of its users and
real-time freshness of tweets, location prediction on Twitter has gained
significant attention in recent years. Research efforts are spent on dealing
with new challenges and opportunities brought by the noisy, short, and
context-rich nature of tweets. In this survey, we aim at offering an overall
picture of location prediction on Twitter. Specifically, we concentrate on the
prediction of user home locations, tweet locations, and mentioned locations. We
first define the three tasks and review the evaluation metrics. By summarizing
Twitter network, tweet content, and tweet context as potential inputs, we then
structurally highlight how the problems depend on these inputs. Each dependency
is illustrated by a comprehensive review of the corresponding strategies
adopted in state-of-the-art approaches. In addition, we also briefly review two
related problems, i.e., semantic location prediction and point-of-interest
recommendation. Finally, we list future research directions.
Comment: Accepted to TKDE. 30 pages, 1 figure.
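Of the three inputs the survey identifies (network, content, context), tweet content is the simplest to sketch. Below is a minimal unigram Naive Bayes home-location classifier; the training tweets, city labels, and vocabulary are all invented for illustration, and real systems described in the survey use far richer features and data.

```python
import math
from collections import Counter, defaultdict

# Hypothetical toy training data: (home city, tweet text) pairs.
TRAIN = [
    ("melbourne", "tram running late again cbd"),
    ("melbourne", "coffee in the cbd laneways"),
    ("new_york", "subway delays again in manhattan"),
    ("new_york", "bagels in manhattan this morning"),
]

def train(data):
    # Per-city unigram counts and per-city document counts.
    word_counts = defaultdict(Counter)
    city_counts = Counter()
    for city, text in data:
        city_counts[city] += 1
        word_counts[city].update(text.split())
    return word_counts, city_counts

def predict(word_counts, city_counts, text):
    # Argmax over cities of log P(city) + sum of log P(word | city),
    # with add-one smoothing over the shared vocabulary.
    vocab = {w for counts in word_counts.values() for w in counts}
    total_docs = sum(city_counts.values())
    best, best_lp = None, float("-inf")
    for city in city_counts:
        lp = math.log(city_counts[city] / total_docs)
        denom = sum(word_counts[city].values()) + len(vocab)
        for w in text.split():
            lp += math.log((word_counts[city][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = city, lp
    return best
```

Location-indicative words ("tram", "bagels") dominate the score, which is the intuition behind the content-based strategies the survey reviews.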