Nondiscrimination Implications of Federal Involvement in Housing
Government enforcement of equal opportunity in all housing, or in all housing connected with the various federal programs discussed in this note, would virtually eliminate the present fear among many whites that the presence of Negroes in the community hurts property values. Even if there were any basis to such a fear, the fact that Negroes had an easily enforceable right to purchase property in all neighborhoods would tend to prevent such price devaluation, since the actual presence of Negroes in more and more areas would eventually make all-white neighborhoods non-existent. Moreover, the fact that all, or the vast majority of, mortgage-lending institutions, builders, and homeowners would be compelled to observe nondiscrimination in housing would foreclose any possible loss of business to the individual institutions, builders, and homeowners, since there would be no alternative source of supply. Now, before the adoption of these nondiscrimination requirements, the imposition of such requirements may well seem a frightening and dangerous exercise of governmental power in derogation of individual rights. But, as was the case with the accommodations and employment sections of the Civil Rights Act of 1964, after passage, people will look back and realize that the fears were exaggerated and that the overall effect of the legislation is highly beneficial to the country's welfare.
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in
building systems that learn and think like people. Many advances have come from
using deep neural networks trained end-to-end in tasks such as object
recognition, video games, and board games, achieving performance that equals or
even beats humans in some respects. Despite their biological inspiration and
performance achievements, these systems differ from human intelligence in
crucial ways. We review progress in cognitive science suggesting that truly
human-like learning and thinking machines will have to reach beyond current
engineering trends in both what they learn, and how they learn it.
Specifically, we argue that these machines should (a) build causal models of
the world that support explanation and understanding, rather than merely
solving pattern recognition problems; (b) ground learning in intuitive theories
of physics and psychology, to support and enrich the knowledge that is learned;
and (c) harness compositionality and learning-to-learn to rapidly acquire and
generalize knowledge to new tasks and situations. We suggest concrete
challenges and promising routes towards these goals that can combine the
strengths of recent neural network advances with more structured cognitive
models.

Comment: In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016).
https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar
Effectiveness evaluation of STOL transport operations (phase 2)
A computer simulation program which models a commercial short-haul aircraft operating in the civil air system was developed. The purpose of the program is to evaluate the effect of a given aircraft avionics capability on the ability of the aircraft to perform on-time carrier operations. The program outputs consist primarily of those quantities which can be used to determine direct operating costs. These include: (1) schedule reliability or delays, (2) repairs/replacements, (3) fuel consumption, and (4) cancellations. More comprehensive models of the terminal area environment were added, and a simulation of an existing airline operation was conducted to obtain a form of model verification. The capability of the program to provide comparative results (sensitivity analysis) was then demonstrated by modifying the aircraft avionics capability for additional computer simulations.
A Compositional Object-Based Approach to Learning Physical Dynamics
We present the Neural Physics Engine (NPE), an object-based neural network architecture for learning predictive models of intuitive physics. We propose a factorization of a physical scene into composable object-based representations and also the NPE architecture whose compositional structure factorizes object dynamics into pairwise interactions. Our approach draws on the strengths of both symbolic and neural approaches: like a symbolic physics engine, the NPE is endowed with generic notions of objects and their interactions, but as a neural network it can also be trained via stochastic gradient descent to adapt to specific object properties and dynamics of different worlds. We evaluate the efficacy of our approach on simple rigid body dynamics in two-dimensional worlds. By comparing to less structured architectures, we show that our model's compositional representation of the structure in physical interactions improves its ability to predict movement, generalize to different numbers of objects, and infer latent properties of objects such as mass.

National Science Foundation (U.S.) (Award CCF-1231216); United States. Office of Naval Research (Grant N00014-16-1-2007)
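The pairwise factorization the abstract describes can be illustrated with a minimal sketch: each object's next state is predicted from its own state plus a sum of learned pairwise effects from every other object. This is not the authors' implementation; the state layout and the toy effect function stand in for the NPE's trained neural encoders and are assumptions for illustration only.

```python
import numpy as np

def pairwise_effect(focus, context):
    # Stand-in for a learned pairwise neural encoder: a toy
    # inverse-square "push" from the context object on the focus object.
    diff = context[:2] - focus[:2]           # relative position
    dist2 = np.dot(diff, diff) + 1e-6        # avoid division by zero
    return diff / dist2

def predict_velocity(states):
    # NPE-style compositional structure: the predicted velocity of each
    # object combines its own state with the SUM of pairwise effects
    # contributed by all other objects in the scene.  Summation makes
    # the prediction naturally handle varying numbers of objects.
    next_v = []
    for i, focus in enumerate(states):
        effect = sum(
            (pairwise_effect(focus, ctx)
             for j, ctx in enumerate(states) if j != i),
            np.zeros(2),
        )
        next_v.append(focus[2:] + effect)    # state = [x, y, vx, vy]
    return np.array(next_v)

# Three objects, each with position (x, y) and velocity (vx, vy).
states = np.array([[0.0, 0.0, 1.0, 0.0],
                   [1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0]])
print(predict_velocity(states).shape)        # (3, 2)
```

Because the per-pair function is shared and its outputs are summed, the same model applies unchanged to scenes with two objects or with twenty, which is the property the abstract credits for generalization across object counts.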
Learning a theory of causality
The very early appearance of abstract knowledge is often taken as evidence for innateness. We explore the relative learning speeds of abstract and specific knowledge within a Bayesian framework and the role for innate structure. We focus on knowledge about causality, seen as a domain-general intuitive theory, and ask whether this knowledge can be learned from co-occurrence of events. We begin by phrasing the causal Bayes nets theory of causality and a range of alternatives in a logical language for relational theories. This allows us to explore simultaneous inductive learning of an abstract theory of causality and a causal model for each of several causal systems. We find that the correct theory of causality can be learned relatively quickly, often becoming available before specific causal theories have been learned—an effect we term the blessing of abstraction. We then explore the effect of providing a variety of auxiliary evidence and find that a collection of simple perceptual input analyzers can help to bootstrap abstract knowledge. Together, these results suggest that the most efficient route to causal knowledge may be to build in not an abstract notion of causality but a powerful inductive learning mechanism and a variety of perceptual supports. While these results are purely computational, they have implications for cognitive development, which we explore in the conclusion.

James S. McDonnell Foundation (Causal Learning Collaborative Initiative); United States. Office of Naval Research (Grant N00014-09-0124); United States. Air Force Office of Scientific Research (Grant FA9550-07-1-0075); United States. Army Research Office (Grant W911NF-08-1-0242)
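The "blessing of abstraction" can be conveyed with a toy Bayesian calculation, far simpler than the paper's relational framework: because an abstract theory receives evidence pooled from every specific system, its posterior can sharpen before any single system is well characterized. The two candidate "theories" and their event probabilities below are invented for illustration and are not from the paper.

```python
import math

def loglik(data, p):
    # Log-likelihood of binary event data under event probability p.
    heads = sum(data)
    tails = len(data) - heads
    return heads * math.log(p) + tails * math.log(1 - p)

# Two hypothetical abstract theories: events in every system are
# common (p = 0.7) versus rare (p = 0.3).  Each system contributes
# only three observations -- too few to pin down that system on its
# own -- but the abstract theory aggregates likelihood across all of
# them, so its posterior converges first.
systems = [[1, 1, 0], [1, 0, 1], [1, 1, 1], [0, 1, 1]]

log_common = sum(loglik(d, 0.7) for d in systems)
log_rare = sum(loglik(d, 0.3) for d in systems)

# Posterior over the abstract theory, assuming a uniform prior.
z = math.exp(log_common) + math.exp(log_rare)
p_common = math.exp(log_common) / z
print(p_common)
```

With only three observations per system, the per-system posteriors remain broad, yet the pooled posterior over the abstract theory is already nearly certain: a miniature instance of abstract knowledge becoming available before the specific models it governs.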