Intergenerational Mobility and Macroeconomic History Dependence
That historical inequality can affect long-run macroeconomic performance has been argued by a large literature on "endogenous inequality" using models of indivisibilities in occupational choice in the presence of borrowing constraints. These models are characterized by a continuum of steady states and an absence of mobility in any steady state. We augment such a model with heterogeneity in agents' abilities in order to generate occupational mobility in steady state. Steady states with mobility are shown to be generically locally unique and finite in number. We provide forms of heterogeneity for which the steady state is globally unique, and others for which it is not. Agent heterogeneity may also cause competitive equilibrium dynamics to fail to converge, but convergence can be restored in the presence of sufficient "inertia" or occupation-switching costs.
Keywords: intergenerational mobility, occupational choice, human capital, borrowing constraints, inequality, history-dependence
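For context, a stylised sketch (in LaTeX) of the class of models the abstract invokes: occupational choice with an indivisible setup cost and a borrowing constraint. The notation and functional forms below are illustrative assumptions, not the paper's own specification.

% Stylised occupational-choice model with an indivisible setup cost and a
% borrowing constraint (illustrative only; not the paper's specification).
An agent who inherits wealth $b$ can become an entrepreneur only if she can
finance the indivisible setup cost $k$, i.e. if $b(1+\lambda) \ge k$, where
$\lambda \ge 0$ is the borrowing limit per unit of collateral; otherwise she
works for the wage $w$. With entrepreneurial profit $\pi > w$ and a linear
bequest rule $b' = \beta y$ applied to end-of-life income $y$, dynastic
wealth follows
\[
  b' =
  \begin{cases}
    \beta \pi & \text{if } b \ge \underline{b} \equiv k/(1+\lambda) \quad \text{(entrepreneur)},\\
    \beta w   & \text{if } b < \underline{b} \quad \text{(worker)}.
  \end{cases}
\]
If $\beta w < \underline{b} \le \beta \pi$, poor dynasties remain workers and
rich dynasties remain entrepreneurs forever, so any initial split of the
population across the threshold is self-perpetuating: a continuum of steady
states with no mobility. Adding idiosyncratic ability that shifts $\pi$ or
$w$ across generations lets some poor, high-ability dynasties cross
$\underline{b}$ and some rich, low-ability ones fall below it, which is the
kind of heterogeneity that generates steady-state mobility in the augmented
model.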
Motivating Students' STEM Learning Using Biographical Information
Science instruction has focused on teaching students scientific content knowledge and problem-solving skills. However, even the best content instruction does not guarantee improved learning, as students' motivation ultimately determines whether or not they will take advantage of the content. The goal of our instruction is to address the "leaky STEM pipeline" problem and retain more students in STEM fields. We designed a struggle-oriented instruction that tells stories about how even the greatest scientists struggled and failed prior to their discoveries. We describe how we have gone about designing this instruction to increase students' motivation and better prepare them to interact and engage with content knowledge. We first discuss why we took this struggle-oriented approach to instruction by delineating the limitations of content-focused science instruction, especially from a motivational standpoint. Second, we detail how we designed and implemented this instruction in schools, outlining the factors that influenced our decisions under specific situational constraints. Finally, we discuss implications for future designers interested in utilizing this approach to instruction.
Measuring And Teaching For Success: Intelligence Versus IQ
Optimize action learning and successful evaluation through adopting new views of IQ. IQ as developed here relates to success in life, and it is among the most changeable of characteristics. However, IQ as measured in the past is one of the least malleable of factors. Would you rather measure for and teach toward something that is not changeable, or something that is very learnable and teachable? If you want to improve success for all in life, forget the normal IQ and begin to use the descriptives you find in this article. The extant literature is replete with theories espousing IQ, EQ, or a combination of both as predictors of success. While the historical importance of IQ as it is currently understood should not be discarded, a more important concept needs to be developed and taught in American educational systems. Simply put, a high IQ does not always correlate with success in life. Yet our metrics for entry into American universities are principally IQ surrogates. And our teaching favors those who can remember and pass a test, not those who are good at the tasks required by their professions. Academicians need to be more concerned with successful intelligence than traditional IQ, for even the most respected of IQ tests "fail to do justice to their creators' conceptions of the nature of intelligence" (Sternberg, p. 336). Read on and see whether this paper develops a case for changing traditional methods for admission to higher education and teaching toward successful intelligence.
ProvMark: A Provenance Expressiveness Benchmarking System
System level provenance is of widespread interest for applications such as
security enforcement and information protection. However, testing the
correctness or completeness of provenance capture tools is challenging and
currently done manually. In some cases there is not even a clear consensus
about what behavior is correct. We present an automated tool, ProvMark, that
uses an existing provenance system as a black box and reliably identifies the
provenance graph structure recorded for a given activity, by a reduction to
subgraph isomorphism problems handled by an external solver. ProvMark is a
beginning step in the much needed area of testing and comparing the
expressiveness of provenance systems. We demonstrate ProvMark's usefulness in
comparing three capture systems with different architectures and distinct
design philosophies.
Comment: To appear, Middleware 201
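To illustrate the core reduction described above (treat the capture tool as a black box, record a background run and a run that additionally performs the target activity, and recover the activity's graph structure as the part of the second run not explained by the first), here is a minimal sketch. It uses networkx's VF2 matcher as a stand-in for the external subgraph-isomorphism solver; the graph construction, generalisation across repeated runs, and solver interface of the actual ProvMark pipeline are not reproduced here.

# Sketch: isolate the provenance recorded for a single target activity by
# comparing a background run with a run that also performs the activity.
# networkx's VF2 matcher stands in for ProvMark's external solver.
import networkx as nx
from networkx.algorithms.isomorphism import DiGraphMatcher

def activity_subgraph(background: nx.DiGraph, foreground: nx.DiGraph) -> nx.DiGraph:
    """Return the part of `foreground` not explained by `background`.

    We look for an embedding of the background graph inside the foreground
    graph (node types must agree); everything outside the matched image is
    attributed to the target activity.
    """
    matcher = DiGraphMatcher(
        foreground, background,
        node_match=lambda a, b: a.get("type") == b.get("type"),
    )
    for mapping in matcher.subgraph_isomorphisms_iter():
        matched = set(mapping)  # foreground nodes covered by the background run
        extra = [n for n in foreground.nodes if n not in matched]
        return foreground.subgraph(extra).copy()
    raise ValueError("background run does not embed into foreground run")

# Usage: build both graphs from the capture tool's output, then diff them.
bg = nx.DiGraph(); bg.add_node("p1", type="process")
fg = nx.DiGraph()
fg.add_node("p1", type="process"); fg.add_node("f1", type="file")
fg.add_edge("p1", "f1", label="wasGeneratedBy")
print(activity_subgraph(bg, fg).nodes(data=True))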
Distributed System Fuzzing
Grey-box fuzzing is the lightweight approach of choice for finding bugs in
sequential programs. It provides a balance between efficiency and effectiveness
by conducting a biased random search over the domain of program inputs using a
feedback function from observed test executions. For distributed system
testing, however, the state-of-practice is represented today by only black-box
tools that do not attempt to infer and exploit any knowledge of the system's
past behaviours to guide the search for bugs.
In this work, we present Mallory: the first framework for grey-box
fuzz-testing of distributed systems. Unlike popular black-box distributed
system fuzzers, such as Jepsen, that search for bugs by randomly injecting
network partitions and node faults or by following human-defined schedules,
Mallory is adaptive. It exercises a novel metric to learn how to maximize the
number of observed system behaviors by choosing different sequences of faults,
thus increasing the likelihood of finding new bugs. The key enablers for our
approach are the new ideas of timeline-driven testing and timeline abstraction
that provide the feedback function guiding a biased random search for failures.
Mallory dynamically constructs Lamport timelines of the system behaviour,
abstracts these timelines into happens-before summaries, and introduces faults
guided by its real-time observation of the summaries.
We have evaluated Mallory on a diverse set of widely-used industrial
distributed systems. Compared to the state-of-the-art black-box fuzzer Jepsen,
Mallory explores more behaviours and takes less time to find bugs. Mallory
discovered 22 zero-day bugs (of which 18 were confirmed by developers),
including 10 new vulnerabilities, in rigorously-tested distributed systems such
as Braft, Dqlite, and Redis. 6 new CVEs have been assigned.
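As a sketch of the feedback idea only (abstract each execution into a happens-before style summary and prefer fault schedules that produce previously unseen summaries), the following simplified illustration uses assumed interfaces: run_with_faults and mutate are hypothetical placeholders for the instrumented cluster runner and the schedule mutator, and the summary here is a crude abstraction over Lamport clocks rather than Mallory's actual timeline abstraction.

# Sketch of timeline-style feedback for guided fault injection: abstract each
# execution into a happens-before summary over event *types* and prefer fault
# schedules that produce summaries not seen before. Simplified illustration,
# not Mallory's implementation.
import random
from itertools import combinations

def happens_before_summary(events):
    """events: list of (lamport_clock, node_id, event_type) observed in one run.
    Abstracts the concrete timeline into the set of ordered pairs of event
    types whose Lamport clocks are strictly ordered."""
    ordered = sorted(events)
    return frozenset(
        (a[2], b[2])
        for a, b in combinations(ordered, 2)
        if a[0] < b[0]
    )

def fuzz(run_with_faults, mutate, initial_schedule, budget=100):
    """Coverage-guided loop: keep fault schedules whose abstract behaviour is new."""
    seen, corpus = set(), [initial_schedule]
    for _ in range(budget):
        schedule = mutate(random.choice(corpus))
        events = run_with_faults(schedule)       # inject faults, record events
        summary = happens_before_summary(events)
        if summary not in seen:                  # new behaviour -> keep schedule
            seen.add(summary)
            corpus.append(schedule)
    return corpus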
Special Libraries, November 1956
Volume 47, Issue 9
https://scholarworks.sjsu.edu/sla_sl_1956/1008/thumbnail.jp
PEACE AND CONFLICT IMPACT ASSESSMENT OF THE NIGER DELTA DEVELOPMENT COMMISSION'S INTERVENTIONS IN ODI, BAYELSA STATE, NIGERIA
Development interventions are aimed at promoting positive change, but they can equally have negative impacts, especially in conflict-prone contexts. Whereas existing studies on Odi and the Niger Delta at large mainly focused on the history, environment, culture, conflict and security situations, the peace and conflict impact of the Nigerian government's socio-economic interventions in the area has not been fully explored. This study, therefore, assessed the Niger Delta Development Commission's (NDDC) interventions, to determine their relationship with the Commission's mandate, strategies, and community needs; their interactions with the community; and their impact on the dynamics of peace and conflict in Odi, a community that has attracted many interventions after the 1999 massacre. The study adopted the grounded theory and case study research designs. Primary and secondary data were collected through key informant and in-depth interviews, official documents and non-participant observation. Fifty-four key informant interviews were conducted with seven members of the Traditional Ruling Council and the Community Development Committee, six religious leaders, five women leaders, five Youth Council executives, 24 project beneficiaries, 12 NDDC staff, and five NDDC consultants. Forty-seven in-depth interviews were also held with six school teachers, ten politicians, and two law enforcement agents in Odi, five international/non-governmental organisations' staff, six activists, and eight academics and professionals. The Niger Delta Regional Development Master Plan, the NDDC Act, and website contents were consulted. Non-participant observations were carried out at NDDC project sites in Odi. The data gathered were content analysed. The NDDC integrated development strategy correlated with NDDC's mandate and people's needs. However, the Commission, in implementing its interventions, contravened some of its articulated guiding principles and policies, such as promoting good governance, transparency, participatory decision-making, and impact assessment. Also, inadequate community consultation caused dissonance between NDDC's and the community's prioritisation of needs. Moreover, owing to inadequate consideration of peace and conflict sensitivity, the interventions produced a series of positive and negative impacts on peace and conflict dynamics in Odi. Construction of roads and educational facilities, rural electrification and training in modern agricultural practices impacted positively on the structural causes of conflict. They brought the federal government's presence to Odi and provided income for male youths employed as labourers and for construction materials' suppliers, as well as capacity building in modern agricultural practices. However, the community perceived the NDDC interventions as resources to be competed for in a socio-political environment characterised by pervasive corruption and bad governance. This provided sufficient conditions for spirals of negative consequences that ultimately reduced the overall effectiveness of the interventions. The negative impacts included entrenching corruption in the intervention cycle, power disequilibrium between the NDDC and the Odi community, oppression and division, gender inequality, communal conflicts, and apathy. The Niger Delta Development Commission's interventions, intended for positive change, also had many negative consequences in Odi because the Commission failed to mainstream peace and conflict sensitivity in the interventions.
The NDDC should therefore adhere strictly to its guiding principles and policies, as well as international best practices in intervention programming, in order to maximise the positive and minimise the negative impacts of its interventions.
Automated Testing and Debugging for Big Data Analytics
The prevalence of big data analytics in almost every large-scale software system has generated a substantial push to build data-intensive scalable computing (DISC) frameworks such as Google MapReduce and Apache Spark that can fully harness the power of existing data centers. However, frameworks once used by domain experts are now being leveraged by data scientists, business analysts, and researchers. This shift in user demographics calls for immediate advancements in the development, debugging, and testing practices of big data applications, which are falling behind compared to DISC framework design and implementation. In practice, big data applications often fail as users are unable to test all behaviors emerging from interleaving dataflow operators, user-defined functions, and framework code. "Testing based on a random sample" rarely guarantees reliability, and "trial and error" and "print" debugging methods are expensive and time-consuming. Thus, the current practice of developing a big data application must be improved, and the tools built to enhance the developer's productivity must adapt to the distinct characteristics of data-intensive scalable computing. By synthesizing ideas from software engineering and database systems, our hypothesis is that we can design effective and scalable testing and debugging algorithms for big data analytics without compromising the performance and efficiency of the underlying DISC framework. To design such techniques, we investigate how we can build interactive and responsive debugging primitives that significantly reduce debugging time, yet do not pose much performance overhead on big data applications. Furthermore, we investigate how we can leverage data provenance techniques from databases and fault-isolation algorithms from software engineering to efficiently pinpoint the minimal subset of failure-inducing inputs. To improve the reliability of big data analytics, we investigate how we can abstract the semantics of dataflow operators and use them in tandem with the semantics of user-defined functions to generate a minimum set of synthetic test inputs capable of revealing more defects than the entire input dataset.
To examine the first hypothesis, we introduce interactive, real-time debugging primitives for big data analytics through innovative and scalable debugging features such as simulated breakpoint, dynamic watchpoint, and crash culprit identification. Second, we design a new automated fault localization approach that combines insights from both the software engineering and database literature to bring delta debugging closer to reality for big data applications, by leveraging data provenance and by constructing systems optimizations for debugging provenance queries. Lastly, we devise a new symbolic-execution-based white-box testing algorithm for big data applications that abstracts the implementation of dataflow operators using logical specifications instead of modeling their implementations, and combines them with the semantics of any arbitrary user-defined function. We instantiate the idea of an interactive debugging algorithm as BigDebug, the idea of an automated debugging algorithm as BigSift, and the idea of symbolic-execution-based testing as BigTest. Our investigation shows that the interactive debugging primitives can scale to terabytes: our record-level tracing incurs less than 25% overhead on average and provides up to 100% time saving compared to the baseline replay debugger.
Second, we observe that by combining data provenance with delta debugging, we can identify the minimum faulty input in just under 30% of the original job execution time. Lastly, we verify that by abstracting dataflow operators using logical specifications, we can efficiently generate the most concise test data suitable for local testing while revealing twice as many faults as prior approaches. Our investigations collectively demonstrate that developer productivity can be significantly improved through effective and scalable testing and debugging techniques for big data analytics, without impacting the DISC framework's performance. This dissertation affirms the feasibility of automated debugging and testing techniques for big data analytics: techniques that were previously considered infeasible for large-scale data processing.
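A compact sketch of the fault-isolation step described above: delta debugging (ddmin) applied to input records, after an assumed provenance-based pre-filter has narrowed the search to records that actually flow into the faulty output. traced_by_provenance and job_fails are hypothetical stand-ins for BigSift's provenance query and test oracle, and none of the systems optimisations mentioned in the abstract are modelled here.

# Sketch: isolate a minimal failure-inducing subset of input records by
# (1) pre-filtering with data provenance and (2) running ddmin on the rest.
# `traced_by_provenance` and `job_fails` are hypothetical stand-ins.

def ddmin(records, job_fails, n=2):
    """Zeller-style minimisation: return a small subset on which the job still fails."""
    while len(records) >= 2:
        chunk = max(1, len(records) // n)
        subsets = [records[i:i + chunk] for i in range(0, len(records), chunk)]
        reduced = False
        for subset in subsets:
            complement = [r for r in records if r not in subset]
            if job_fails(subset):                      # failure reproduces on the subset
                records, n, reduced = subset, 2, True
                break
            if len(subsets) > 2 and job_fails(complement):
                records, n, reduced = complement, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(records):                      # cannot split any further
                break
            n = min(len(records), 2 * n)               # refine granularity
    return records

def minimal_faulty_input(dataset, faulty_output, traced_by_provenance, job_fails):
    candidates = traced_by_provenance(dataset, faulty_output)  # provenance pre-filter
    return ddmin(list(candidates), job_fails)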
Exploring Perspectives on the Impact of Artificial Intelligence on the Creativity of Knowledge Work: Beyond Mechanised Plagiarism and Stochastic Parrots
Artificial Intelligence (AI), and in particular generative models, are
transformative tools for knowledge work. They problematise notions of
creativity, originality, plagiarism, the attribution of credit, and copyright
ownership. Critics of generative models emphasise the reliance on large amounts
of training data, and view the output of these models as no more than
randomised plagiarism, remix, or collage of the source data. On these grounds,
many have argued for stronger regulations on the deployment, use, and
attribution of the output of these models. However, these issues are not new or
unique to artificial intelligence. In this position paper, using examples from
literary criticism, the history of art, and copyright law, I show how
creativity and originality resist definition as a notatable or
information-theoretic property of an object, and instead can be seen as the
property of a process, an author, or a viewer. Further alternative views hold
that all creative work is essentially reuse (mostly without attribution), or
that randomness itself can be creative. I suggest that creativity is ultimately
defined by communities of creators and receivers, and the deemed sources of
creativity in a workflow often depend on which parts of the workflow can be
automated. Using examples from recent studies of AI in creative knowledge work,
I suggest that AI shifts knowledge work from material production to critical
integration. This position paper aims to begin a conversation around a more
nuanced approach to the problems of creativity and credit assignment for
generative models, one which more fully recognises the importance of the
creative and curatorial voice of the users of these models and moves away from
simpler notational or information-theoretic views.
Comment: Advait Sarkar. 2023. Exploring Perspectives on the Impact of Artificial Intelligence on the Creativity of Knowledge Work: Beyond Mechanised Plagiarism and Stochastic Parrots. In Annual Symposium on Human-Computer Interaction for Work 2023 (CHIWORK 2023), June 13-16, 2023, Oldenburg, Germany. ACM, New York, NY, USA, 17 pages.