1,173 research outputs found

    Omaha Intrametropolitan Locational Changes in Manufacturing: 1969 to 1987.

    Get PDF
    Studies of locational change in manufacturing at the metropolitan scale have attracted increasing research interest. The evolution and variety of theories of the intrametropolitan location of American manufacturing have been documented in the literature. The primary objectives of this study are to: 1) examine locational changes in manufacturing within the Omaha metropolitan area between 1969 and 1987, and 2) assess how Omaha's pattern of industrial change fits the theoretical pattern established in the literature. By devising a three-zone spatial base across the metropolitan area, it was determined that manufacturing employment in the downtown, or inner, area has declined relative to the suburban zone. Manufacturing in the suburban area has performed better, growing faster than manufacturing in the city center. Land zoned for industrial use in downtown Omaha and industrial parks developed with access to the interstate system were the major factors behind the present distribution of industrial firms. The suburban zone in the Omaha SMSA appears to have greater potential for increased industrial development. Omaha may well continue to develop as predicted in models of urban manufacturing change. However, at present, Omaha has only just begun the suburbanization phase of manufacturing, unlike most cities across the U.S. studied in the literature.

    Developmental Bayesian Optimization of Black-Box with Visual Similarity-Based Transfer Learning

    Full text link
    We present a developmental framework based on a long-term memory and reasoning mechanisms (Visual Similarity and Bayesian Optimisation). This architecture allows a robot to autonomously optimize hyper-parameters that need to be tuned for any action and/or vision module, treated as a black box. The learning can take advantage of past experiences (stored in the episodic and procedural memories) to warm-start the exploration using a set of hyper-parameters previously optimized for objects similar to the new, unknown one (stored in a semantic memory). As an example, the system has been used to optimize 9 continuous hyper-parameters of a professional software package (Kamido), both in simulation and with a real robot (an industrial Fanuc robotic arm), on a total of 13 different objects. The robot is able to find a good object-specific optimization in 68 (simulation) or 40 (real) trials. In simulation, we demonstrate the benefit of transfer learning based on visual similarity, as opposed to amnesic learning (i.e., learning from scratch every time). Moreover, with the real robot, we show that the method consistently outperforms manual optimization by an expert, achieving more than 88% success with less than 2 hours of training time.
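A minimal sketch of the warm-start idea described above. This is illustrative only: plain random search stands in for Bayesian optimisation, and `black_box` is a toy stand-in for the Kamido success-rate objective; the function names and the toy objective are assumptions, not the paper's code.

```python
import random

# Hypothetical black-box score; in the paper this would be the success
# rate of the Kamido software on a manipulation task for one object.
def black_box(params):
    # toy objective: peaks when every dimension is near 0.7
    return -sum((p - 0.7) ** 2 for p in params)

def optimize(n_dims, budget, warm_starts=None, seed=0):
    """Random-search stand-in for Bayesian optimisation.

    warm_starts: hyper-parameter vectors previously optimised for
    visually similar objects (retrieved from the semantic memory)."""
    rng = random.Random(seed)
    candidates = list(warm_starts or [])
    while len(candidates) < budget:
        candidates.append([rng.random() for _ in range(n_dims)])
    return max(candidates, key=black_box)

# Amnesic learning: explore from scratch every time.
cold = optimize(n_dims=9, budget=40)
# Transfer learning: seed the search with parameters tuned for a
# visually similar object, so a good region is tried immediately.
warm = optimize(n_dims=9, budget=40, warm_starts=[[0.68] * 9])
```

The warm-started run begins from an already-good configuration, which is the mechanism by which visual-similarity transfer reduces the number of trials needed.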

    Bounded transaction model checking

    Get PDF
    Technical report. Industrial cache coherence protocol models often have too many reachable states, preventing full reachability analysis even for small model instances (number of processors, addresses, etc.). Several partial-search debugging methods are therefore employed, including lossy state compression using hash compaction and bounded model checking (BMC, or equivalently, depth-bounded search). We show that a bounded transaction approach is much more effective for debugging than a BMC approach. This is because the basic unit of activity in a cache coherence protocol is the transaction - e.g., a complete causal cycle of actions beginning with a node making a request for a line and ending with it obtaining the line. The reduced effectiveness of BMC stems mainly from the fact that by limiting only the search depth, it cannot be guaranteed that complete transactions get selected, or that the right kind and maximal number of interacting transactions are explored. Thus, instead of bounded model checking, which explores all possible interleavings in BFS, we propose a bounded transaction model-checking approach for debugging cache coherence protocols, where the criterion is to allow a certain number of transactions, chosen from a set of potentially interfering transactions, to be explored. We have built a bounded transaction version of the Murphi model checker and shown that it can find seeded bugs in protocols far more effectively, especially when full BFS runs out of memory and misses these bugs. We compare our work with similar ideas - such as debugging communicating push-down systems [1] by bounding the number of interleavings (a similar idea, but different in detail).
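As a toy illustration of bounding by transactions rather than depth (this is a hypothetical two-node protocol, not the paper's Murphi implementation), each node runs request/grant/done cycles and a completed cycle counts as one transaction; BFS prunes any state whose transaction count exceeds the bound:

```python
from collections import deque

N = 2  # two nodes, each cycling through phases 0=idle, 1=requested, 2=granted

def successors(state):
    """state = (per-node phases, number of completed transactions)."""
    phases, completed = state
    for i in range(N):
        p = list(phases)
        if p[i] < 2:
            p[i] += 1                  # advance this node's in-flight cycle
            yield (tuple(p), completed)
        else:
            p[i] = 0                   # closing the cycle completes a transaction
            yield (tuple(p), completed + 1)

def bounded_transaction_bfs(max_transactions):
    """BFS bounded by completed transactions, not by search depth."""
    init = ((0,) * N, 0)
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        for t in successors(s):
            if t[1] > max_transactions or t in seen:
                continue
            seen.add(t)
            frontier.append(t)
    return seen

# Allowing more interacting transactions enlarges the explored space,
# but every explored state lies on a path of complete transactions.
assert len(bounded_transaction_bfs(0)) < len(bounded_transaction_bfs(2))
```

The point of the contrast with plain depth-bounded BMC is visible here: the bound is expressed in protocol-meaningful units (whole causal cycles), so no budget is spent on prefixes that never complete a transaction.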

    A general compositional approach to verifying hierarchical cache coherence protocols

    Get PDF
    Technical report. Modern chip multiprocessor (CMP) cache coherence protocols are extremely complex and error-prone to design. Modern symbolic methods are unable to provide much leverage for this class of examples. In [1], we presented a method to verify hierarchical and inclusive versions of these protocols using explicit state enumeration tools. We circumvented state explosion by employing a meta-circular assume/guarantee technique in which a designer can model check abstracted versions of the original protocol and claim that the real protocol is correct. The abstractions were justified in the same framework (hence the meta-circular approach). In this paper, we show how our work can be extended to hierarchical non-inclusive protocols, which are inherently much harder to verify, both because they have more corner cases and because higher levels of the protocol hierarchy carry insufficient information to imply the sharing states of cache lines at lower levels. Two methods are proposed. The first requires more manual effort, but allows our technique in [1] to be applied unchanged, barring a guard-strengthening expression that is computed based on state residing outside the cluster being abstracted. The second requires less manual effort, can scale to deeper hierarchies of protocol implementations, and uses history variables which are computed much more modularly. This method also relies on the meta-circular definition framework. A non-inclusive protocol that could not be completely model checked even after visiting 1.5 billion states was verified using two model checks of roughly 0.25 billion states each.

    Interpretation of Movie Posters from the Perspective of Multimodal Discourse Analysis

    Get PDF
    Given the remarkable development of multimedia and computer technology in the information age, the previously dominant role of language in mass media and communication is challenged by other semiotic resources such as image, sound and action. Accordingly, new grammars must be formulated to give a comprehensive account of the integrative meaning generated by the interaction of different modalities in discourse. The theory of multimodal discourse analysis (MDA), which is theoretically based on Systemic-Functional Linguistics, addresses this problem to a large degree. In the light of the grammar of visual design by Kress and van Leeuwen, this paper formulates a model for MDA of movie posters. A qualitative and interpretative approach is used to conduct an in-depth discussion, which helps to test the feasibility of this model and to point out the key to its application. The present study may not only enlarge the application area of Systemic-Functional Linguistics, but also fill a gap in the discourse analysis of movie posters.

    An interface aware guided search method for error-trace justification in large protocols

    Get PDF
    Technical report. Many complex concurrent protocols that cannot be formally verified due to state explosion can often be formally verified by initially creating a collection of abstractions (overapproximations), and subsequently refining the overapproximated protocol in response to spurious counterexample traces. Such an approach crucially depends on the ability to check whether a given error trace in the abstract protocol corresponds to a concrete trace in the original protocol. Unfortunately, this checking step alone can be as hard as verifying the original protocol directly without abstractions, which is infeasible. Our approach tracks the interface behavior at the interfaces erected by our abstractions, and employs a few heuristic search methods based on a classification of the abstract system generating these traces. This collection of heuristic search methods forms a tailor-made guided search strategy that works very efficiently in practice on three realistic multicore hierarchical cache coherence protocols. It could correctly analyze ?? ?? spurious error traces and genuine error scenarios, each within seconds. Also, on ?? of the ?? ?? spurious errors, our approach can precisely report which transition in the abstract protocol is over-approximated, leading to the spurious error.

    Predicate abstraction for Murphi

    Get PDF
    Technical report. Predicate abstraction is a technique used to prove properties of a finite- or infinite-state system. It employs decision procedures to abstract a concrete state system into a finite-state abstract system, which is then model checked and refined. In this paper, we present an approach to implementing predicate abstraction for Murphi [1] using CVC Lite [2]. Two cases for each property (i.e., SAT and UNSAT) are tried in model checking. When a fixed point is finally reached, the validity of each property is declared. We applied our tool (called PAM) to the FLASH [3] and German [4] protocols. The preliminary results on these protocols are encouraging.
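A toy sketch of the abstraction step described above. This is illustrative only: the paper computes abstract transitions with the CVC Lite decision procedure, whereas this sketch simply maps concrete states to predicate valuations along a bounded unrolling; the counter system and the two predicates are invented for the example.

```python
def step(x):
    """Concrete system: a counter stepping x -> x + 2 from 0."""
    return x + 2

# User-chosen predicates: P1 = "x is even", P2 = "x >= 10".
predicates = [lambda x: x % 2 == 0, lambda x: x >= 10]

def alpha(x):
    """Abstract a concrete state into a tuple of predicate truth values."""
    return tuple(p(x) for p in predicates)

def abstract_states(x0, steps=20):
    """Collect the abstract states seen along a bounded concrete unrolling."""
    seen, x = set(), x0
    for _ in range(steps):
        seen.add(alpha(x))
        x = step(x)
    return seen

# Only two abstract states arise, however far the counter runs, and the
# property "x is always even" holds in every one of them.
assert all(is_even for is_even, _ in abstract_states(0))
```

The payoff of the technique is visible even in this toy: the concrete system has unboundedly many states, but the predicate abstraction collapses them to a small finite set on which a model checker can reach a fixed point.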

    Personalized Video Recommendation Using Rich Contents from Videos

    Full text link
    Video recommendation has become an essential way of helping people explore massive video collections and discover the ones that may be of interest to them. In existing video recommender systems, the models make recommendations based on user-video interactions and single, specific content features. When those specific content features are unavailable, the performance of the existing models deteriorates seriously. Inspired by the fact that rich contents (e.g., text, audio, motion, and so on) exist in videos, in this paper we explore how to use these rich contents to overcome the limitations caused by the unavailability of the specific ones. Specifically, we propose a novel general framework that incorporates an arbitrary single content feature with user-video interactions, named the collaborative embedding regression (CER) model, to make effective video recommendations in both in-matrix and out-of-matrix scenarios. Our extensive experiments on two real-world large-scale datasets show that CER beats the existing recommender models with any single content feature and is more time-efficient. In addition, we propose a priority-based late fusion (PRI) method to gain the benefit brought by integrating multiple content features. The corresponding experiment shows that PRI brings real performance improvement over the baseline and outperforms the existing fusion methods.
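One plausible reading of the priority idea, sketched below with hypothetical feature names, a made-up priority order, and toy scores; the paper's actual PRI combination rule may weight features rather than fall back through them as done here.

```python
# Assumed priority order over content features; not taken from the paper.
PRIORITY = ["text", "audio", "motion"]

def fuse(scores_by_feature, items):
    """Late-fuse per-item scores from single-feature CER runs.

    scores_by_feature: {feature: {item: score}}; for each item the
    highest-priority feature with an available score wins."""
    fused = {}
    for item in items:
        for feat in PRIORITY:
            score = scores_by_feature.get(feat, {}).get(item)
            if score is not None:      # first available feature wins
                fused[item] = score
                break
        else:
            fused[item] = 0.0          # no feature available for this item
    return fused

# Toy scores: "text" is missing for v2, and v3 has no features at all.
scores = {"text": {"v1": 0.9}, "audio": {"v1": 0.4, "v2": 0.7}}
ranked = sorted(fuse(scores, ["v1", "v2", "v3"]).items(),
                key=lambda kv: -kv[1])
```

The fallback structure is what lets the system keep recommending when a preferred content feature is unavailable for some videos, which is the failure mode of single-feature models that the abstract highlights.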