
    Fuzzy linear programs with optimal tolerance levels

    It is usually supposed that tolerance levels are determined a priori by the decision maker in a fuzzy linear program (FLP). In this paper we suppose instead that the decision maker does not care about the particular values of the tolerance levels, but wishes to minimize their weighted sum. This is a new statement of the FLP, because here the tolerance levels are also treated as variables.
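This formulation can be sketched with an off-the-shelf LP solver: append the tolerance levels t to the decision vector, soften each constraint from Ax <= b to Ax - t <= b, and price the weighted sum w·t into the objective. The small problem data (A, b, c, w) below are illustrative, not taken from the paper.

```python
# Sketch of an FLP with tolerance levels as decision variables.
# Illustrative data, not from the paper.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -1.0])   # original objective: maximize x1 + x2
w = np.array([1.0, 1.0])     # weights on the tolerance levels

# Decision vector z = (x1, x2, t1, t2): soften A x <= b to A x - t <= b
# and add the weighted tolerances w.t to the objective.
A_soft = np.hstack([A, -np.eye(2)])
obj = np.concatenate([c, w])
res = linprog(obj, A_ub=A_soft, b_ub=b, bounds=[(0, None)] * 4)
print(res.x)  # optimal x together with the tolerance levels chosen
```

With these weights, relaxing either constraint costs more than it gains, so the solver keeps both tolerance levels at zero; cheaper weights would let it buy slack where that pays off.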

    Cognitive Styles and Adaptive Web-based Learning

    Adaptive hypermedia techniques have been widely used in web-based learning programs. Traditionally these programs have focused on adapting to the user’s prior knowledge, but recent research has begun to consider adapting to cognitive style. This study aims to determine whether interfaces tailored to the user’s cognitive style improve learning performance and perceptions. The findings indicate that adapting interfaces to cognitive style does not in itself facilitate learning, but that mismatched interfaces may cause problems for learners. The results also suggest that an interface which caters for different cognitive styles and offers a selection of navigational tools might be more beneficial for learners. The implications of these findings for the design of web-based learning programs are discussed.

    Energy Disaggregation for Real-Time Building Flexibility Detection

    Energy is a limited resource which has to be managed wisely, taking into account both supply-demand matching and capacity constraints in the distribution grid. One aspect of smart energy management at the building level is the problem of real-time detection of the available flexible demand. In this paper we propose the use of energy disaggregation techniques to perform this task. Firstly, we investigate the use of existing classification methods to perform energy disaggregation. A comparison is performed between four classifiers, namely Naive Bayes, k-Nearest Neighbors, Support Vector Machine and AdaBoost. Secondly, we propose the use of a Restricted Boltzmann Machine to automatically perform feature extraction. The extracted features are then used as inputs to the four classifiers and are shown to improve their accuracy. The efficiency of our approach is demonstrated on a real database, REDD, consisting of detailed appliance-level measurements with high temporal resolution, which has been used for energy disaggregation in previous studies. The results show robustness and good generalization to newly presented buildings, with at least 96% accuracy. (Comment: To appear in IEEE PES General Meeting, 2016, Boston, US)
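The two-stage idea — unsupervised feature extraction with an RBM, then supervised classification — can be sketched as a scikit-learn pipeline. The synthetic data below stands in for the REDD appliance-level measurements, and the component sizes are arbitrary.

```python
# Sketch of RBM feature extraction feeding a classifier (one of the
# paper's four; k-NN is shown). Synthetic data stands in for REDD.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = Pipeline([
    ("scale", MinMaxScaler()),                        # RBM expects inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=16, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
])
model.fit(X_tr, y_tr)
print(model.score(X_te, y_te))  # held-out accuracy on the toy data
```

The same pipeline skeleton would accept any of the other three classifiers in the final step.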

    On the automated extraction of regression knowledge from databases

    The advent of inexpensive, powerful computing systems, together with the increasing amount of available data, constitutes one of the greatest challenges for next-century information science. Since it is apparent that much future analysis will be done automatically, a good deal of attention has been paid recently to the implementation of ideas and/or the adaptation of systems originally developed in machine learning and other areas of computer science. This interest seems to stem both from the suspicion that traditional techniques are not well-suited for large-scale automation and from the success of new algorithmic concepts in difficult optimization problems. In this paper, I discuss a number of issues concerning the automated extraction of regression knowledge from databases. By regression knowledge is meant quantitative knowledge about the relationship between a vector of predictors or independent variables (x) and a scalar response or dependent variable (y). A number of difficulties found in some well-known tools are pointed out, and a flexible framework avoiding many of these difficulties is described and advocated. Basic features of a new tool pursuing this direction are reviewed.
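In the abstract's terms, the simplest piece of "regression knowledge" is a fitted quantitative relationship between a predictor vector x and a scalar response y. A minimal automated fit, on synthetic data standing in for a database table, might look like:

```python
# Minimal illustration of "regression knowledge": recover the
# quantitative relationship y = f(x) from tabular data. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # predictor columns from a table
true_coef = np.array([2.0, -1.0, 0.5])   # relationship to be recovered
y = X @ true_coef + rng.normal(scale=0.1, size=200)

# Ordinary least squares fit of y on x
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # estimated coefficients, close to true_coef
```

A framework of the kind the paper advocates would automate model selection and diagnostics on top of such basic fits.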

    Scientific Computing Meets Big Data Technology: An Astronomy Use Case

    Scientific analyses commonly compose multiple single-process programs into a dataflow. An end-to-end dataflow of single-process programs is known as a many-task application. Typically, tools from the HPC software stack are used to parallelize these analyses. In this work, we investigate an alternate approach that uses Apache Spark -- a modern big data platform -- to parallelize many-task applications. We present Kira, a flexible and distributed astronomy image processing toolkit built on Apache Spark. We then use the Kira toolkit to implement a Source Extractor application for astronomy images, called Kira SE. With Kira SE as the use case, we study the programming flexibility, dataflow richness, scheduling capacity and performance of Apache Spark running on the EC2 cloud. By exploiting data locality, Kira SE achieves a 2.5x speedup over an equivalent C program when analyzing a 1 TB dataset using 512 cores on the Amazon EC2 cloud. Furthermore, we show that by leveraging software originally designed for big data infrastructure, Kira SE achieves performance competitive with the C implementation running on the NERSC Edison supercomputer. Our experience with Kira indicates that emerging big data platforms such as Apache Spark are a performant alternative for many-task scientific applications.
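The many-task pattern the paper parallelizes is simply: run one independent task per input image and collect the results. Kira itself does this on a Spark cluster; as a minimal stand-in, the same shape can be sketched with a standard-library executor, where the placeholder function below takes the place of Kira SE's actual source extraction.

```python
# Many-task pattern: one independent task per input, results collected.
# A thread pool stands in for the Spark cluster, and extract_sources is
# a placeholder, not Kira SE's real source extractor.
from concurrent.futures import ThreadPoolExecutor

def extract_sources(image_id: int) -> int:
    # placeholder per-image computation
    return image_id * image_id

images = range(8)
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(extract_sources, images))
print(results)  # one result per input image, order preserved
```

Spark adds what this sketch lacks: cluster-wide scheduling, fault tolerance, and the data locality the paper exploits for its speedup.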

    Farming in the Eastern Amazon: poor but allocatively efficient

    This research empirically investigates the well-known 'poor-but-efficient' hypothesis formulated by Schultz (1964), which assumes that small-scale farmers in developing countries are reasonably efficient in allocating their scarce resources, responding positively to price incentives. Deviating from Schultz, it is assumed here that scale effects explain a considerable proportion of small-scale farmers' relative efficiency. The theoretical underpinnings of the scale-efficiency concept are briefly reviewed before a normalized generalized Leontief (GL) profit function is modeled, using its output-supply and input-demand system to capture the joint production of cassava flour and maize by a sample of small-scale farmers in the Bragantina region of the Eastern Amazon, Brazil. Theoretical consistency and functional flexibility are addressed by imposing convexity on the GL profit framework. The empirical results confirm our revised hypothesis that small farmers in traditional development settings are 'poor but allocatively efficient', while clearly suggesting considerable inefficiency with respect to the scale of operations. Keywords: Efficiency, Joint Production, Small Scale Farming, Schultz Hypothesis, Farm Management.

    A programming system for research and applications in structural optimization

    The flexibility necessary for such diverse uses is achieved by combining, in a modular manner, a state-of-the-art optimization program, a production-level structural analysis program, and user-supplied, problem-dependent interface programs. Standard utility capabilities in modern computer operating systems are used to integrate these programs. This approach yields flexibility in the organization of the optimization procedure and versatility in the formulation of constraints and design variables. Features shown in numerical examples include: variability of structural layout and overall shape geometry, static strength and stiffness constraints, local buckling failure, and vibration constraints.
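The modular organization described above — a general optimizer, a separate structural-analysis routine, and thin interface code between them — can be sketched in miniature. The single-bar model, load, and stress limit below are invented for illustration; they are not from the paper.

```python
# Sketch of the modular split: optimizer + analysis + interface glue.
# The bar model and its numbers are illustrative only.
from scipy.optimize import minimize

def structural_analysis(area: float) -> dict:
    # stand-in for a production analysis program: weight and stress
    # of a bar with the given cross-sectional area under a fixed load
    load = 1000.0
    return {"weight": 7.85 * area, "stress": load / area}

def objective(x):                # interface program: minimize weight
    return structural_analysis(x[0])["weight"]

def strength_constraint(x):      # interface program: stress <= 250
    return 250.0 - structural_analysis(x[0])["stress"]

res = minimize(objective, x0=[10.0],
               constraints=[{"type": "ineq", "fun": strength_constraint}],
               bounds=[(0.1, None)])
print(res.x)  # optimal area drives stress to the allowable limit
```

Swapping in a different analysis program or constraint only touches the interface functions, which is the flexibility the modular approach is after.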

    Labour use and its adjustment in Indian manufacturing industries

    This study provides an empirical investigation of the adjustment process of labour in Indian manufacturing industries, which evolved through structural transformation in the era of globalization. The analysis is based on a dynamic model applied to a panel of 22 two-digit manufacturing industries over the 22-year period from 1980/81 to 2001/02. We assume that as competition increases, industries adjust their employment towards a desired level which is both industry- and time-specific. The results indicate that the manufacturing sector has shown considerable dynamism in adjusting its workforce. Long-run labour demand responds most strongly to output, then to capital, and least to wages. It is observed that Indian manufacturing is not inefficient in its use of labour, as a modest speed of adjustment has kept employment near its optimal level. Keywords: Labor and Human Capital.
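The adjustment mechanism behind such dynamic models is typically partial adjustment: each period, employment closes a fraction of the gap to its desired level, and that fraction is the speed of adjustment. The numbers below are illustrative, not the study's estimates.

```python
# Sketch of partial adjustment: employment closes a fraction lam of
# the gap to its desired level each period. Illustrative numbers only.
lam = 0.4            # hypothetical speed of adjustment per period
L_desired = 100.0    # hypothetical desired employment level
L = 60.0             # initial employment

path = []
for _ in range(10):
    L = L + lam * (L_desired - L)   # close a fraction lam of the gap
    path.append(round(L, 1))
print(path)  # employment rising monotonically toward the desired level
```

A modest lam, as the study finds, means the gap shrinks steadily rather than instantly, so observed employment tracks close to the desired level over time.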

    Integrating diversity management initiatives with strategic human resource management

    Managing diversity is usually viewed in broad conceptual terms as recognising and valuing differences among people; it is directed towards achieving organisational outcomes and reflects management practices adopted to improve the effectiveness of people management in organisations (Kramar 2001; Erwee, Palamara & Maguire 2000). The purpose of this chapter is to examine the debate on how diversity management initiatives can be integrated with strategic human resource management (SHRM), and how SHRM is linked to organisational strategy. Part of this debate considers to what extent the processes associated with managing diversity are an integral part of the strategic vision of management. However, there is no consensus on how a corporate strategic plan influences or is influenced by SHRM, and how the latter integrates diversity management as a key component. The first section of the chapter addresses the controversy over whether organisations are linear, steady-state entities or dynamic, complex and fluid ones. This controversy fuels debate in the subsequent sections about the impact that such paradigms have on approaches to SHRM. The discussion of SHRM in this chapter explores its links to corporate strategy as well as to diversity management. Subsequent sections propose that managing diversity should address sensitive topics such as gender, race and ethnicity. Finally, attention is given to whether an integrative approach to SHRM can be achieved and how to overcome the obstacles to making this a reality.