Recall Latencies, Confidence, and Output Positions of True and False Memories: Implications for Recall and Metamemory Theories
Recall latency, recall accuracy rate, and recall confidence were examined in free recall as a function of recall output serial position, using a modified Deese-Roediger-McDermott paradigm to test a strength-based theory against the dual-retrieval process theory of recall output sequence. The strength theory predicts that items are output in descending order of memory strength. The dual-retrieval process theory postulates two phases in free recall: a first direct-access phase, in which items are output verbatim in weakest-to-strongest order (cognitive triage), and a second reconstructive phase, in which reconstructed items are output in strongest-to-weakest order. In four experiments, all three indicators of memory strength (latency, accuracy, and confidence) consistently showed a descending-strength order of recall for both true and false memories. Additionally, false memories were found to be output in two phases, and subjects' confidence judgments of their own memory could not be accounted for by retrieval fluency (recall latency).
Graph Based Reduction of Program Verification Conditions
Increasing the automation of proofs in deductive verification of C programs
is a challenging task. Known heuristics for generating simpler verification
conditions are not efficient enough when applied to industrial C programs,
mainly because of their size and their high number of irrelevant hypotheses.
This work presents a strategy to reduce program verification conditions by
selecting their relevant hypotheses. The relevance of a hypothesis is
determined by the combination of a syntactic analysis and two graph
traversals: the first graph is labeled by constants and the second one by the
predicates in the axioms. The approach is applied to a benchmark arising in
industrial program verification.
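The hypothesis-selection idea can be illustrated with a minimal sketch. All names here are hypothetical and the reachability rule is a drastic simplification of the paper's combined syntactic analysis and graph traversals; it only shows the core intuition that a hypothesis is relevant when it is connected, through shared symbols, to the goal:

```python
def select_hypotheses(goal_consts, hyps):
    """Keep only hypotheses reachable from the goal's constants.

    hyps maps a hypothesis name to the set of constants it mentions.
    A hypothesis is relevant if it shares a constant with the growing
    set seeded by the goal; its other constants are then added, and the
    process repeats to a fixpoint (a traversal of the constant graph).
    """
    relevant, frontier = set(), set(goal_consts)
    changed = True
    while changed:
        changed = False
        for name, consts in hyps.items():
            if name not in relevant and consts & frontier:
                relevant.add(name)
                frontier |= consts
                changed = True
    return relevant

hyps = {
    "h1": {"x", "y"},   # mentions goal constant x -> relevant
    "h2": {"y", "z"},   # linked to the goal through h1's y -> relevant
    "h3": {"a", "b"},   # disconnected from the goal -> pruned
}
print(sorted(select_hypotheses({"x"}, hyps)))  # ['h1', 'h2']
```

Pruning h3 shrinks the verification condition passed to the prover, which is the effect the paper measures on its industrial benchmark.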
Algorithms For Extracting Timeliness Graphs
We consider asynchronous message-passing systems in which some links are
timely and processes may crash. Each run defines a timeliness graph among
correct processes: (p, q) is an edge of the timeliness graph if the link from p
to q is timely (that is, there is a bound on communication delays from p to q).
The main goal of this paper is to approximate this timeliness graph by graphs
having some properties (such as being trees, rings, ...). Given a family S of
graphs, for runs in which the timeliness graph contains at least one graph in
S, an extraction algorithm requires each correct process to converge to the
same graph in S, one that is, in a precise sense, an approximation of the
timeliness graph of the run. For example, if the timeliness graph contains a
ring, then using an extraction algorithm, all correct processes eventually
converge to the same ring and in this ring all nodes will be correct processes
and all links will be timely. We first present a general extraction algorithm
and then a more specific extraction algorithm that is communication efficient
(i.e., eventually all the messages of the extraction algorithm use only links
of the extracted graph).
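Greatly simplified, the extraction goal can be sketched offline. This sketch assumes (unlike the paper's asynchronous setting) that the timeliness graph is fully known, and all names are hypothetical; it only illustrates why a deterministic selection rule gives convergence to the same member of S:

```python
def extract(timely, family):
    # Deterministic rule: return the first family member (in a fixed
    # order) whose edges are all timely links.  Determinism is what lets
    # all correct processes converge to the *same* graph once they agree
    # on the timeliness graph -- the convergence property, greatly
    # simplified away from the paper's message-passing algorithms.
    for cand in sorted(family, key=lambda g: sorted(g)):
        if cand <= timely:
            return cand
    return None

# Timely links among correct processes p, q, r in some run:
timely = {("p", "q"), ("q", "r"), ("r", "p")}
# Family S of candidate rings, each given as an edge set:
S = [frozenset({("p", "q"), ("q", "r"), ("r", "p")}),
     frozenset({("p", "r"), ("r", "q"), ("q", "p")})]
ring = extract(timely, S)
print(sorted(ring))  # [('p', 'q'), ('q', 'r'), ('r', 'p')]
```

Every node and link of the extracted ring lies in the timeliness graph, matching the guarantee stated above that all nodes are correct processes and all links are timely.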
Formal change impact analyses for emulated control software
Processor emulators are a software tool for allowing legacy computer programs to be executed on a modern processor. In the past, emulators have been used in trivial applications such as maintenance of video games. Now, however, processor emulation is being applied to safety-critical control systems, including military avionics. These applications demand utmost guarantees of correctness, but no verification techniques exist for proving that an emulated system preserves the original system's functional and timing properties. Here we show how this can be done by combining concepts previously used for reasoning about real-time program compilation with an understanding of the new and old software architectures. In particular, we show how both the old and new systems can be given a common semantics, thus allowing their behaviours to be compared directly.
The Weakest Failure Detector for Eventual Consistency
In its classical form, a consistent replicated service requires all replicas
to witness the same evolution of the service state. Assuming a message-passing
environment with a majority of correct processes, the necessary and sufficient
information about failures for implementing a general state machine replication
scheme ensuring consistency is captured by the Ω failure detector. This
paper shows that in such a message-passing environment, Ω is also the
weakest failure detector to implement an eventually consistent replicated
service, where replicas are expected to agree on the evolution of the service
state only after some (a priori unknown) time. In fact, we show that Ω
is the weakest to implement eventual consistency in any message-passing
environment, i.e., under any assumption on when and where failures might occur.
Ensuring (strong) consistency in any environment requires, in addition to
Ω, the quorum failure detector Σ. Our paper thus captures, for
the first time, an exact computational difference between building a
replicated state machine that ensures consistency and one that only ensures
eventual consistency.
TOP OF MIND AWARENESS (TOMA) STRATEGY FOR HYPERMARKET "X" IN SURABAYA
Surabaya, as one of the biggest cities in Indonesia, has great demand for the retail industry. There
are 5 hypermarkets in Surabaya, 3 of which dominated 88.5% of total revenue in 2009. This competition
has caused every hypermarket to rearrange and refresh its marketing strategies in order to win the
competition. The questions that Hypermarket X's manager has to answer are mostly these 3
questions. The first one is "Is my brand the first retrieved brand?" Using the Top-of-Mind Awareness
concept, a self-administered questionnaire was used to gather data from 150 respondents, of which 138 were
completely filled in. The result is that Carrefour leads with 43.48%, while Hypermarket X gains only
20.29%. The second one is "Do consumers purchase at the first retrieved brand?" A Chi-square test with a
significance level below 0.05 proves that there is a relationship between the TOMA hypermarket and the
hypermarket of future purchase. And the last question is "How can my hypermarket become the first retrieved
brand?" Paired t-test, Chi-square, Discriminant, and Quadrant analyses will explain it. Hypermarket X needs
to maintain its "empathy" and improve its "reliability" and "price guarantee" through "advertisement", in order to
win the competition.
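The Chi-square independence test used for the second question can be sketched as follows. The contingency table below is illustrative only, not the study's data; it merely shows the mechanics of testing whether top-of-mind brand and purchased brand are related:

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / n   # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# Illustrative 2x2 table (NOT the paper's data):
# rows = top-of-mind brand (Hypermarket X / other),
# columns = brand actually purchased (Hypermarket X / other).
table = [[40, 10],
         [20, 68]]
stat = chi_square(table)
# df = (2-1) * (2-1) = 1; the 0.05 critical value for df=1 is 3.841.
print(stat > 3.841)  # True -> reject independence at the 0.05 level
```

A statistic above the critical value rejects independence, i.e., it supports the paper's finding of a relationship between the TOMA hypermarket and the hypermarket of future purchase.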