
    Does Fundamental Indexation Lead to Better Risk Adjusted Returns? New Evidence from Australian Securities Exchange

    Fundamental indexing based on accounting valuation has drawn significant interest from academics and practitioners in recent times as an alternative to capitalization-weighted indexing based on market valuation. This paper investigates the claims of superiority of the fundamental indexation strategy by using data for Australian Securities Exchange (ASX) listed stocks between 1985 and 2010. Not only do our results strongly support the outperformance claims observed in other geographical markets, we find that the excess returns from fundamental indexation in the Australian market are actually much higher. The fundamental indexation strategy does underperform during strong bull markets, although this effect diminishes with longer time horizons. On a rolling five-year basis, the fundamental index always outperforms the capitalization-weighted index. Contrary to many previous studies, our results show that the superior performance of fundamental indexation could not be attributed to value, size, or momentum effects. Overall, the findings indicate that fundamental indexation could offer potential outperformance of traditional indexation based on market capitalization even after adjusting for the former's slightly higher turnover and transaction costs.
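    The contrast between the two weighting schemes can be sketched as follows. This is an illustrative toy, not the paper's methodology; the tickers and dollar figures are invented, and book value stands in for whichever accounting fundamentals the index actually uses.

    ```python
    # Hypothetical comparison of capitalization weighting vs. fundamental
    # weighting. All names and numbers are illustrative, not ASX data.

    def cap_weights(market_caps):
        """Weight each stock by its share of total market capitalization."""
        total = sum(market_caps.values())
        return {s: mc / total for s, mc in market_caps.items()}

    def fundamental_weights(fundamentals):
        """Weight each stock by an accounting measure, e.g. book value."""
        total = sum(fundamentals.values())
        return {s: f / total for s, f in fundamentals.items()}

    market_caps = {"AAA": 300.0, "BBB": 150.0, "CCC": 50.0}  # $m, made up
    book_values = {"AAA": 120.0, "BBB": 100.0, "CCC": 80.0}  # $m, made up

    print(cap_weights(market_caps))          # AAA dominates: 0.6, 0.3, 0.1
    print(fundamental_weights(book_values))  # flatter: 0.4, ~0.333, ~0.267
    ```

    The point of the sketch is that fundamental weights are insensitive to market price, which is why the strategy lags in strong bull markets (where prices run ahead of fundamentals) yet can outperform over longer horizons.
    
    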

    Transaction processing core for accelerating software transactional memory

    Submitted for review to the MICRO-40 conference on 9 June 2007. This paper introduces an advanced hardware-based approach for accelerating Software Transactional Memory (STM). The proposed solution focuses on speeding up conflict detection, whose cost grows polynomially with the number of concurrently running transactions, and shared-to-transaction-local address resolution, the most frequent STM operation. This is achieved by logic split across two hardware units: the Transaction Processing Core and the Transactional Memory Look-Aside Buffer. The Transaction Processing Core is a separate hardware unit which performs eager conflict detection and address resolution by indexing transactional objects based on their virtual addresses. The Transactional Memory Look-Aside Buffer is a per-processor extension that caches the addresses translated by the Transaction Processing Core, reducing bus traffic and the time spent on communication between the CPUs and the Transaction Processing Core. Compared with other existing solutions, our approach mainly differs in proposing an implementation that is not based on the processor cache but on a separate on-chip core, uses virtual addresses, does not require application modification, and is further enhanced by the Transactional Memory Look-Aside Buffer. Our experiments confirm the potential of the Transaction Processing Core to dramatically speed up STM systems.
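    The eager conflict detection the paper implements in hardware can be illustrated with a minimal software sketch: transactional writes are indexed by virtual address, and an access that touches an address written by another live transaction is flagged immediately. The class and method names below are assumptions for illustration, not the paper's interfaces.

    ```python
    # Toy model of eager conflict detection keyed on virtual addresses,
    # loosely mirroring the Transaction Processing Core's role. All
    # structures are illustrative, not the hardware design.

    class TransactionProcessingCore:
        def __init__(self):
            self.writers = {}  # virtual address -> id of the txn that wrote it

        def access(self, txn_id, vaddr, is_write):
            """Return True if conflict-free; record writers eagerly."""
            owner = self.writers.get(vaddr)
            if owner is not None and owner != txn_id:
                return False  # eager conflict: another live txn wrote vaddr
            if is_write:
                self.writers[vaddr] = txn_id
            return True

        def commit(self, txn_id):
            """Release every address owned by the committing transaction."""
            self.writers = {a: t for a, t in self.writers.items() if t != txn_id}

    tpc = TransactionProcessingCore()
    assert tpc.access(1, 0x1000, is_write=True)       # txn 1 writes 0x1000
    assert not tpc.access(2, 0x1000, is_write=False)  # txn 2 conflicts at once
    tpc.commit(1)
    assert tpc.access(2, 0x1000, is_write=False)      # free after commit
    ```

    In the paper's design this table lives in a dedicated on-chip core rather than software, and the per-processor look-aside buffer caches its address translations so most lookups never cross the bus.
    
    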

    Bitmap indexing a suitable approach for data warehouse design

    A data warehouse is a huge collection of data which is subject-oriented, integrated, time-variant and non-volatile. Because of its size, fast data access is the major performance parameter of any data warehouse. The information retrieved from a data warehouse is generally summarized or aggregated, as required by an organization's decision-making processes, so typical queries combine aggregation functions with HAVING clauses. Extracting information efficiently from a data warehouse is the challenge facing researchers: because the database is huge, the time required to access information is greater than in normal databases. Creating an index on such a database is therefore essential to its performance, and selecting the appropriate index type decreases query execution time. Most current database products use B-tree indexing to create indexes on tables. B-tree indexing is useful for databases that require frequent updates, such as On-Line Transaction Processing (OLTP) systems, but it is a time-consuming approach for a data warehouse and On-Line Analytical Processing (OLAP). Since a data warehouse is not frequently updated, bitmap indexing is the appropriate choice: a bitmap index is created once on the required attribute vectors, and because the underlying data set is fixed it can then be used for any query. As a query arrives, the relevant bitmaps are selected and the query executed against them. Bitmap indexing suits the data warehouse precisely because of its non-volatile, huge data set. DOI: 10.17762/ijritcc2321-8169.15025
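    The bitmap indexing scheme the abstract recommends can be sketched briefly: one bit vector per distinct value of a low-cardinality column, so an equality predicate becomes a lookup and combined predicates become bitwise AND/OR. The column values below are invented for illustration.

    ```python
    # Illustrative bitmap index over a low-cardinality column. Each
    # distinct value maps to an integer used as a bit vector; bit r is
    # set when row r holds that value.

    def build_bitmap_index(column):
        index = {}
        for row, value in enumerate(column):
            index[value] = index.get(value, 0) | (1 << row)  # set bit for row
        return index

    def rows_matching(bitmap, n_rows):
        """Decode a bit vector back into the list of matching row numbers."""
        return [r for r in range(n_rows) if bitmap >> r & 1]

    region = ["east", "west", "east", "north", "west", "east"]  # made-up data
    idx = build_bitmap_index(region)

    print(rows_matching(idx["east"], len(region)))                 # → [0, 2, 5]
    print(rows_matching(idx["east"] | idx["west"], len(region)))   # east OR west
    ```

    Because the warehouse is non-volatile, these bit vectors are built once and never maintained under updates, which is exactly why the approach beats B-trees in this setting.
    
    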

    Instant restore after a media failure

    Media failures usually leave database systems unavailable for several hours until recovery is complete, especially in applications with large devices and high transaction volume. Previous work introduced a technique called single-pass restore, which increases restore bandwidth and thus substantially decreases time to repair. Instant restore goes further as it permits read/write access to any data on a device undergoing restore--even data not yet restored--by restoring individual data segments on demand. Thus, the restore process is guided primarily by the needs of applications, and the observed mean time to repair is effectively reduced from several hours to a few seconds. This paper presents an implementation and evaluation of instant restore. The technique is incrementally implemented on a system starting with the traditional ARIES design for logging and recovery. Experiments show that the transaction latency perceived after a media failure can be cut down to less than a second and that the overhead imposed by the technique on normal processing is minimal. The net effect is that a few "nines" of availability are added to the system using simple and low-overhead software techniques.
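    The on-demand policy at the heart of instant restore can be sketched as follows. This is a toy model, not the paper's implementation: the real system reconstructs each segment by merging the backup image with archived log records under ARIES, which the single `restore_segment` call below only stands in for.

    ```python
    # Toy model of on-demand segment restore: a read of a segment that
    # has not yet been restored triggers restoring just that segment,
    # so applications see seconds, not hours, of unavailability.

    class InstantRestoreDevice:
        def __init__(self, backup, n_segments):
            self.backup = backup               # segment id -> backed-up content
            self.restored = {}                 # segments recovered so far
            self.pending = set(range(n_segments))

        def restore_segment(self, seg):
            # Stand-in for merging the backup image with the log archive.
            self.restored[seg] = self.backup[seg]
            self.pending.discard(seg)

        def read(self, seg):
            """Serve a read, restoring the segment on demand if needed."""
            if seg not in self.restored:
                self.restore_segment(seg)      # on-demand, one segment only
            return self.restored[seg]

    dev = InstantRestoreDevice({0: "a", 1: "b", 2: "c"}, n_segments=3)
    print(dev.read(2))          # → c   (restored on demand)
    print(sorted(dev.pending))  # → [0, 1]  segments not yet requested
    ```

    A background sweep would restore the remaining `pending` segments in the idle time between application requests, so the device eventually converges to fully restored either way.
    
    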

    The Mental Database

    This article uses database, evolution and physics considerations to suggest how the mind stores and processes its data. The innovations in its approach lie in: A) The comparison of the capabilities of the mind to those of a modern relational database while conserving phenomenality. The strong functional similarity of the two systems leads to the conclusion that the mind may be profitably described as being a mental database. The need for material/mental bridging and addressing indexes is discussed. B) The consideration of what neural correlates of consciousness (NCC) between sensorimotor data and instrumented observation one can hope to obtain using current biophysics. It is deduced that what is seen using the various brain scanning methods reflects only that part of current activity transactions (e.g. visualizing) which update and interrogate the mind, but not the contents of the integrated mental database which constitutes the mind itself. This approach yields reasons why there is much neural activity in an area to which a conscious function is ascribed (e.g. the amygdala is associated with fear), yet no visible part of that activity can be clearly identified as phenomenal. The concept is then situated in a Penrosian expanded physical environment, requiring evolutionary continuity, modularity and phenomenality. Several novel Darwinian advantages arising from the approach are described.

    Using association rule mining to enrich semantic concepts for video retrieval

    In order to achieve true content-based information retrieval on video we should analyse and index video with high-level semantic concepts in addition to using user-generated tags and structured metadata like title, date, etc. However, the range of such high-level semantic concepts, detected either manually or automatically, is usually limited compared to the richness of the information content in video and the potential vocabulary of available concepts for indexing. Even though there is work to improve the performance of individual concept classifiers, we should strive to make the best use of whatever partial sets of semantic concept occurrences are available to us. We describe in this paper our method for using association rule mining to automatically enrich the representation of video content through a set of semantic concepts based on concept co-occurrence patterns. We describe our experiments on the TRECVid 2005 video corpus annotated with the 449 concepts of the LSCOM ontology. The evaluation of our results shows the usefulness of our approach.
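    The enrichment idea can be sketched with a minimal association-rule miner over concept co-occurrences: if concept A implies concept B with high support and confidence in the annotated corpus, B is added to shots where only A was detected. The concept names, corpus and thresholds below are invented for illustration and are not LSCOM-specific; the paper's actual mining procedure may differ.

    ```python
    # Minimal sketch of concept enrichment via association rules mined
    # from co-occurrence patterns. Each "shot" is a set of detected
    # concepts; rules are single-antecedent for simplicity.

    from itertools import permutations

    def mine_rules(shots, min_support=0.3, min_confidence=0.8):
        n = len(shots)
        concepts = {c for s in shots for c in s}
        rules = {}  # antecedent concept -> set of concepts it implies
        for a, b in permutations(concepts, 2):
            support = sum(1 for s in shots if a in s and b in s) / n
            count_a = sum(1 for s in shots if a in s)
            if count_a and support >= min_support:
                confidence = support * n / count_a
                if confidence >= min_confidence:
                    rules.setdefault(a, set()).add(b)
        return rules

    def enrich(detected, rules):
        """Add every concept implied by a detected concept."""
        enriched = set(detected)
        for c in detected:
            enriched |= rules.get(c, set())
        return enriched

    corpus = [{"road", "car"}, {"road", "car"}, {"road", "car", "sky"}, {"sky"}]
    rules = mine_rules(corpus)
    print(enrich({"road"}, rules))  # "car" is added: road -> car holds here
    ```

    In practice the mined rules let a partial, noisy concept detection stand in for a richer annotation, which is exactly the "best use of partial concept sets" the abstract argues for.
    
    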