Fossil evidence for spin alignment of SDSS galaxies in filaments
We search for and find fossil evidence that the spin axes of galaxies in
cosmic web filaments are not randomly oriented relative to their host
filaments. Such a signal would indicate that large-scale tidal torques
affected the alignments of galaxies located in cosmic filaments. To
this end, we constructed a catalogue of clean filaments containing edge-on
galaxies. We started by applying the Multiscale Morphology Filter (MMF)
technique to the galaxies in a redshift-distortion corrected version of the
Sloan Digital Sky Survey DR5. From that sample we extracted those 426 filaments
that contained edge-on galaxies (b/a < 0.2). These filaments were then visually
classified relative to a variety of quality criteria. Statistical analysis
using "feature measures" indicates that the distribution of orientations of
these edge-on galaxies relative to their parent filament deviates significantly
from what would be expected on the basis of a random distribution of
orientations. The interpretation of this result may not be immediately
apparent, but it is easy to identify a population of 14 objects whose spin axes
are aligned perpendicular to the spine of the parent filament (\cos \theta <
0.2). The candidate objects are found in relatively less dense filaments. This
might be expected since galaxies in such locations suffer less interaction with
surrounding galaxies, and consequently better preserve their tidally induced
orientations relative to the parent filament. The technique of searching for
fossil evidence of alignment yields relatively few candidate objects, but it
does not suffer from the dilution effects inherent in correlation analysis of
large samples.
Comment: 20 pages, 19 figures; slightly revised and upgraded version, accepted
for publication by MNRAS. For a high-resolution version see
http://www.astro.rug.nl/~weygaert/SpinAlignJones.rev.pd
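The alignment test described above can be sketched in a few lines. A minimal sketch, assuming hypothetical arrays of unit spin axes and filament spine directions (the paper's catalogue is not reproduced here) and a Kolmogorov-Smirnov distance against uniformity as a simple stand-in for the paper's "feature measure" statistics; the relevant fact is that for isotropically oriented spins, |cos θ| relative to any fixed axis is uniform on [0, 1]:

```python
import numpy as np

def alignment_cosines(spin_axes, filament_dirs):
    # |cos(theta)| between each spin axis and its filament spine;
    # both arguments are (N, 3) arrays of unit vectors.
    return np.abs(np.sum(spin_axes * filament_dirs, axis=1))

def ks_statistic_vs_uniform(cos_theta):
    # Kolmogorov-Smirnov distance between the empirical CDF of
    # |cos(theta)| and the uniform CDF on [0, 1] -- the expected
    # distribution if spin axes are isotropic.
    x = np.sort(cos_theta)
    n = len(x)
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    return max(np.max(ecdf_hi - x), np.max(x - ecdf_lo))

rng = np.random.default_rng(42)
spine = np.array([0.0, 0.0, 1.0])

# Perpendicular population: spins confined to the plane normal to
# the spine, so cos(theta) = 0 for every galaxy.
phi = rng.uniform(0, 2 * np.pi, 500)
spins = np.stack([np.cos(phi), np.sin(phi), np.zeros_like(phi)], axis=1)
d_perp = ks_statistic_vs_uniform(
    alignment_cosines(spins, np.tile(spine, (500, 1))))

# Isotropic population for comparison (normalized Gaussian vectors
# are uniform on the sphere).
iso = rng.normal(size=(500, 3))
iso /= np.linalg.norm(iso, axis=1, keepdims=True)
d_iso = ks_statistic_vs_uniform(
    alignment_cosines(iso, np.tile(spine, (500, 1))))
```

A perfectly perpendicular population drives the KS distance to its maximum, while the isotropic control stays near zero, which is the kind of deviation the fossil-evidence search looks for.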
Comparing two appraisal models of interest
"Interest is an emotion associated with curiosity, exploration, and knowledge-seeking (Fredrickson, 1998; Izard, 1977; Silvia, 2005a, 2005b, 2006; Tomkins, 1962). The first researchers to propose an appraisal structure of interest were Smith and Ellsworth (1985). An alternative appraisal structure of interest was proposed by Silvia (2005a, 2005b). Experiment 1 tested these competing models. Participants viewed copies of calming and disturbing classical and contemporary paintings, rated each picture for appraisals, and reported their experienced interest, pleasantness/enjoyment, and disturbingness. Experiment 2 aimed to replicate the appraisal structures for the emotion of interest and measured viewing time. Results showed (1) interest and pleasantness were unrelated; (2) novelty-complexity positively predicted interest; (3) disturbing pictures were highly interesting; (4) and viewing time positively predicted interest."--Abstract from author supplied metadata
Novel Rule Base Development from IED-Resident Big Data for Protective Relay Analysis Expert System
Many Expert Systems for intelligent electronic device (IED) performance analysis, such as those for protective relays, have been developed to ascertain operations, maximize availability, and thereby minimize misoperation risks. However, the manual handling of an overwhelming volume of relay-resident big data, together with heavy dependence on protection experts' often contrasting knowledge and on voluminous relay manuals, has hindered the maintenance of these Expert Systems. The objective of this chapter is therefore to study the design of an Expert System called the Protective Relay Analysis System (PRAY), which embeds a rule base construction module. This module provides the facility of intelligently maintaining the knowledge base of PRAY through the prior discovery of relay operation (association) rules via a novel integrated data mining approach combining Rough-Set-Genetic-Algorithm-based rule discovery with a Rule Quality Measure. The developed PRAY runs its relay analysis by first validating whether a protective relay under test operates correctly as expected, by comparing hypothesized and actual relay behavior. In the case of relay maloperations or misoperations, it diagnoses the presented symptoms by identifying their causes. This study illustrates how, with such prior hybrid-data-mining-based knowledge base maintenance of an Expert System, the regular and rigorous analyses of protective relay performance carried out by power utility entities can be conveniently achieved.
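The Rule Quality Measure step can be illustrated in miniature. A minimal sketch, assuming hypothetical relay event records, attribute names, and a simple support/confidence measure (illustrative stand-ins, not the chapter's actual rough-set/genetic-algorithm pipeline), scoring a candidate operation rule against a table of records:

```python
def rule_quality(records, antecedent, consequent):
    # antecedent/consequent are dicts of attribute -> required value.
    # support = fraction of all records matching both sides;
    # confidence = fraction of antecedent matches that also match
    # the consequent.
    matches_ant = [r for r in records
                   if all(r.get(k) == v for k, v in antecedent.items())]
    matches_both = [r for r in matches_ant
                    if all(r.get(k) == v for k, v in consequent.items())]
    support = len(matches_both) / len(records) if records else 0.0
    confidence = len(matches_both) / len(matches_ant) if matches_ant else 0.0
    return support, confidence

# Hypothetical relay event records.
records = [
    {"fault_type": "phase-ground", "zone": 1, "trip": "correct"},
    {"fault_type": "phase-ground", "zone": 1, "trip": "correct"},
    {"fault_type": "phase-ground", "zone": 2, "trip": "delayed"},
    {"fault_type": "phase-phase", "zone": 1, "trip": "correct"},
]
sup, conf = rule_quality(records,
                         {"fault_type": "phase-ground", "zone": 1},
                         {"trip": "correct"})
# sup = 0.5, conf = 1.0
```

In a pipeline like the one described, candidate rules mined from the data would be kept or discarded based on such quality scores before entering the knowledge base.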
Boredom, Information-Seeking and Exploration
Any adaptive organism faces the choice between taking
actions with known benefits (exploitation), and sampling new
actions to check for other, more valuable opportunities
available (exploration). The latter involves information-
seeking, a drive so fundamental to learning and long-term
reward that it can reasonably be considered, through evolution
or development, to have acquired its own value, independent
of immediate reward. Similarly, behaviors that fail to yield
information may have come to be associated with aversive
experiences such as boredom, demotivation, and task
disengagement. In accord with these suppositions, we propose
that boredom reflects an adaptive signal for managing the
exploration-exploitation tradeoff, in the service of optimizing
information acquisition and long-term reward. We tested
participants in three experiments, manipulating the
information content in their immediate task environment, and
showed that increased perceptions of boredom arise in
environments in which there is little useful information, and
that higher boredom correlates with higher exploration. These
findings are the first step toward a model formalizing the
relationship between exploration, exploitation and boredom
Differentiable Neural Computers with Memory Demon
A Differentiable Neural Computer (DNC) is a neural network with an external
memory which allows for iterative content modification via read, write and
delete operations.
We show that information theoretic properties of the memory contents play an
important role in the performance of such architectures. We introduce a novel
concept of memory demon to DNC architectures which modifies the memory contents
implicitly via additive input encoding. The goal of the memory demon is to
maximize the expected sum of mutual information of the consecutive external
memory contents.
Comment: NeurIPS 2022 Workshop on Memory in Artificial and Real Intelligence
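The demon's objective can be illustrated with a toy estimator. A minimal sketch, assuming a simple histogram-based mutual-information estimate over flattened memory snapshots (an illustrative stand-in; neither the DNC architecture nor the paper's estimator is reproduced here):

```python
import numpy as np

def mutual_information(x, y, bins=8):
    # Histogram estimate of I(X; Y) in nats between two equally
    # shaped arrays, e.g. flattened external memory matrices at
    # steps t and t+1.
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def demon_objective(memory_snapshots):
    # Sum of mutual information over consecutive memory contents --
    # the quantity the memory demon is trained to maximize.
    return sum(mutual_information(a, b)
               for a, b in zip(memory_snapshots, memory_snapshots[1:]))

rng = np.random.default_rng(0)
m0 = rng.normal(size=(16, 32))
identical = demon_objective([m0, m0.copy()])                    # high MI
independent = demon_objective([m0, rng.normal(size=(16, 32))])  # near zero
```

Fully redundant consecutive snapshots score high, while statistically independent ones score near zero, so maximizing this sum pushes the demon's additive encodings toward memory contents that carry information forward across steps.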
The effect of negative feedback on motivation : a meta-analytic investigation
Although the most prominent view in psychological theory has been that negative feedback should generally have a detrimental impact on motivation, competing perspectives and caveats on this prominent view have suggested that negative feedback may sometimes have neutral or even positive effects on motivation. A meta-analysis of 79 studies examined the effect of negative feedback on motivation and related outcomes with both child and adult samples. Results indicated that negative feedback, compared to positive feedback, decreased intrinsic motivation and perceived competence. This effect is much smaller when compared to neutral or no feedback. Moderator tests revealed that negative feedback seems to be less demotivating when (a) the feedback statement includes instructional details on how to improve, (b) the feedback is framed against objective rather than normative standards, and (c) the task is interesting. Implications for future research and applications to real-world settings are discussed.
The Bee's Knees or Spines of a Spider: What Makes an 'Insect' Interesting?
Insects and their kin (bugs) are among the most detested and despised creatures on earth. Irrational fears of these mostly harmless organisms often restrict and prevent opportunities for outdoor recreation and leisure. Alternatively, Shipley and Bixler (2016) theorize that direct and positive experiences with bugs during middle childhood may result in fascination with insects, leading to comfort in wildland settings. The objective of this research was to examine and identify the novel and unfamiliar bug types that people are more likely to find interesting and visually attend to when spontaneously presented with their images. This research examined these questions through four integrated exploratory studies. The first study (n = 216) found that a majority of adults are unfamiliar with a majority of bugs, despite the abundance of many common but 'unfamiliar' bugs. The second (n = 15) and third (n = 308) studies examined participants' first impressions of unfamiliar bugs. The second study consisted of in-depth interviews, while the third study had participants report their perceptions of bugs across multiple emotional dimensions. Together, both studies suggest there are many unfamiliar bugs that are perceptually novel and perceived as interesting when encountered. The fourth study (n = 48) collected metrics of visual attention using eye-tracking, measuring visual fixations while participants viewed different bugs identified through the previous studies as either interesting or uninteresting. The findings of the fourth study suggest that interesting bugs can capture more visual attention than uninteresting bugs.
Results from all four studies provide a heuristic for interpretive naturalists, magazine editors, marketers, public relations advisors, filmmakers, and other visual communication professionals that can be used in the choice of images of unfamiliar insects and other small invertebrates to evoke situational interest and motivate subsequent behavior.
AnICA: Analyzing Inconsistencies in Microarchitectural Code Analyzers
Microarchitectural code analyzers, i.e., tools that estimate the throughput
of machine code basic blocks, are important utensils in the tool belt of
performance engineers. Recent tools like llvm-mca, uiCA, and Ithemal use a
variety of techniques and different models for their throughput predictions.
When put to the test, it is common to see these state-of-the-art tools give
very different results. These inconsistencies are either errors, or they point
to different and rarely documented assumptions made by the tool designers.
In this paper, we present AnICA, a tool taking inspiration from differential
testing and abstract interpretation to systematically analyze inconsistencies
among these code analyzers. Our evaluation shows that AnICA can summarize
thousands of inconsistencies in a few dozen descriptions that directly lead to
high-level insights into the different behavior of the tools. In several case
studies, we further demonstrate how AnICA automatically finds and characterizes
known and unknown bugs in llvm-mca, as well as a quirk in AMD's Zen
microarchitectures.
Comment: To appear in Proceedings of the ACM on Programming Languages
(PACMPL), Vol. 6, No. OOPSLA
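The differential-testing idea behind AnICA can be sketched in miniature. A minimal sketch, assuming hypothetical toy throughput predictors and an arbitrary disagreement threshold; the real subjects (llvm-mca, uiCA, Ithemal) are external tools invoked on machine-code basic blocks, not the table-driven stand-ins below:

```python
def relative_difference(a, b):
    # Symmetric relative disagreement between two throughput estimates.
    return abs(a - b) / max(a, b)

def find_inconsistencies(blocks, predictor_a, predictor_b, threshold=0.2):
    # Feed the same basic blocks to both predictors and collect the
    # blocks on which they disagree beyond the threshold.
    inconsistent = []
    for block in blocks:
        ta, tb = predictor_a(block), predictor_b(block)
        if relative_difference(ta, tb) > threshold:
            inconsistent.append((block, ta, tb))
    return inconsistent

# Toy predictors: cycle estimates from a per-instruction cost table.
# The two tables disagree only on the cost of "div".
COST_A = {"add": 0.25, "mul": 1.0, "div": 6.0}
COST_B = {"add": 0.25, "mul": 1.0, "div": 14.0}
pred_a = lambda block: sum(COST_A[insn] for insn in block)
pred_b = lambda block: sum(COST_B[insn] for insn in block)

blocks = [["add", "add", "mul"], ["mul", "div"], ["add", "mul"]]
bad = find_inconsistencies(blocks, pred_a, pred_b)
# only ["mul", "div"] exceeds the threshold
```

AnICA's contribution lies in what happens after this step: abstracting the flagged blocks into a few dozen general descriptions rather than reporting thousands of individual disagreements.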