
    Specialization and Geographic Concentration of East Java Manufacturing Industries

    The spatial concentration of economic activity, especially in manufacturing industries, has become an interesting phenomenon to analyze. In manufacturing industries, spatial concentration is shaped by wages, transportation costs, market access, and externalities related to localization and urbanization economies. Spatial concentration is also related to industrial specialization, which depends on the industrial structure of the region. The objective of this paper is to describe where East Java's manufacturing industries are concentrated, how those industries are distributed across locations, and how spatial concentration relates to specialization and industrial structure in East Java. The paper uses the Location Quotient, the Herfindahl index, the Ellison-Glaeser index, the Krugman regional specialization index, and the Krugman bilateral index to analyze the data. Keywords: Specialization, Concentration, Manufacturing
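
    A minimal numerical sketch of three of the indices named in the abstract (Location Quotient, Krugman regional specialization index, Herfindahl concentration index) on a made-up employment matrix; the figures and the region/industry layout are illustrative, not the paper's data.

        import numpy as np

        # Employment matrix: rows = regions, columns = industries (hypothetical numbers).
        emp = np.array([
            [1200.0,  300.0,  500.0],   # region A
            [ 400.0,  900.0,  250.0],   # region B
            [ 700.0,  600.0, 1100.0],   # region C
        ])

        region_totals = emp.sum(axis=1, keepdims=True)      # E_r
        industry_totals = emp.sum(axis=0, keepdims=True)    # E_i
        grand_total = emp.sum()                             # E

        # Location Quotient: (E_ri / E_r) / (E_i / E); values > 1 signal regional specialization.
        lq = (emp / region_totals) / (industry_totals / grand_total)

        # Krugman regional specialization index: sum_i |s_ri - s_i|,
        # comparing a region's industry mix with the national mix.
        regional_shares = emp / region_totals
        national_shares = industry_totals / grand_total
        krugman = np.abs(regional_shares - national_shares).sum(axis=1)

        # Herfindahl index of geographic concentration for each industry: sum_r (E_ri / E_i)^2.
        herfindahl = ((emp / industry_totals) ** 2).sum(axis=0)

        print("Location quotients:\n", lq.round(2))
        print("Krugman specialization index by region:", krugman.round(3))
        print("Herfindahl concentration by industry:", herfindahl.round(3))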

    Waltz - An exploratory visualization tool for volume data, using multiform abstract displays

    Although visualization is now widely used, misinterpretations still occur. There are three primary solutions intended to help a user interpret data correctly: displaying the data in different forms (multiform visualization); simplifying (or abstracting) the structure of the viewed information; and linking objects and views together (allowing corresponding objects to be jointly manipulated and interrogated). These well-known visualization techniques place an emphasis on the visualization display. We believe, however, that current visualization systems do not utilise the display effectively, often, for example, placing it at the end of a long visualization process. Our visualization system, based on an adapted visualization model, allows a display method to be used throughout the visualization process, with the user operating a 'Display (correlate) and Refine' visualization cycle. This display integration provides a useful exploration environment in which objects and views may be directly manipulated; a set of 'portions of interest' can be selected to generate a specialized dataset, which may subsequently be further displayed, manipulated and filtered.
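
    A minimal sketch of the 'Display (correlate) and Refine' cycle on a synthetic volume: display the data, select portions of interest, build a specialized sub-dataset, and display it again. The stub display function and the selection predicate are assumptions for illustration, not Waltz's actual interface.

        import numpy as np

        def display(volume):
            # Stand-in for a multiform display (e.g. slice view, isosurface, histogram).
            print(f"showing {volume.shape} voxels, intensity {volume.min():.0f}-{volume.max():.0f}")

        def refine(volume, predicate):
            # Keep only the voxels of interest; everything else is masked out.
            return np.where(predicate(volume), volume, 0)

        volume = np.random.default_rng(1).integers(0, 255, size=(32, 32, 32)).astype(float)
        display(volume)                                # display ...
        subset = refine(volume, lambda v: v > 200)     # ... select portions of interest ...
        display(subset)                                # ... and display the refined dataset again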

    Comparison of Classical and Robust Quadratic Discriminant Analysis for Classifying Student Specialization (Case Study: SMA Negeri 1 Kendal, Academic Year 2014/2015)

    Discriminant analysis is a multivariate statistical technique that can be used to classify new observations into a particular group. Quadratic discriminant analysis assumes multivariate normally distributed observations and unequal variance-covariance matrices. Robust quadratic discriminant analysis can be used when the observations contain outliers. On the specialization data of students at SMA Negeri 1 Kendal, which contain outliers, classification with robust quadratic discriminant analysis using the Minimum Covariance Determinant (MCD) estimator achieves 95.06% accuracy (4.94% misclassification), while classical quadratic discriminant analysis achieves 92.59% accuracy (7.41% misclassification). A robust quadratic discriminant analysis with the MCD estimator is therefore more appropriate for data containing outliers. Keywords: discriminant, outliers, robust, MCD estimator, classification
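
    A small sketch of the comparison described above, assuming scikit-learn: classical quadratic discriminant analysis via QuadraticDiscriminantAnalysis, and a robust variant that plugs Minimum Covariance Determinant (MCD) estimates of each class's mean and covariance into the quadratic discriminant rule. The synthetic two-group data and the injected outliers are illustrative stand-ins, not the SMA Negeri 1 Kendal data.

        import numpy as np
        from sklearn.covariance import MinCovDet
        from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

        rng = np.random.default_rng(0)

        # Synthetic two-group scores; a few outliers are injected into group 0.
        X0 = rng.multivariate_normal([70, 75], [[25, 10], [10, 30]], size=80)
        X1 = rng.multivariate_normal([82, 68], [[20, -5], [-5, 15]], size=80)
        X0[:5] += 40  # outliers
        X = np.vstack([X0, X1])
        y = np.array([0] * 80 + [1] * 80)

        # Classical QDA: per-class means and covariances from ordinary estimates.
        clf = QuadraticDiscriminantAnalysis(store_covariance=True).fit(X, y)

        # Robust variant: use MCD estimates of each class's mean and covariance
        # and score observations with the quadratic discriminant rule directly.
        def robust_qda_predict(X_new, X_train, y_train):
            scores = []
            for label in np.unique(y_train):
                Xc = X_train[y_train == label]
                mcd = MinCovDet(random_state=0).fit(Xc)
                diff = X_new - mcd.location_
                inv = np.linalg.inv(mcd.covariance_)
                maha = np.einsum("ij,jk,ik->i", diff, inv, diff)
                prior = np.log(len(Xc) / len(X_train))
                logdet = np.linalg.slogdet(mcd.covariance_)[1]
                scores.append(-0.5 * maha - 0.5 * logdet + prior)
            return np.argmax(np.column_stack(scores), axis=1)

        print("classical accuracy:", (clf.predict(X) == y).mean())
        print("robust MCD accuracy:", (robust_qda_predict(X, X, y) == y).mean())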

    Heuristic algorithm for interpretation of multi-valued attributes in similarity-based fuzzy relational databases

    In this work, we present implementation details and extended scalability tests of the heuristic algorithm we have used previously [1,2] to discover knowledge from multi-valued data entries stored in similarity-based fuzzy relational databases. Multi-valued symbolic descriptors, characterizing individual attributes of database records, are commonly used in similarity-based fuzzy databases to reflect uncertainty about the recorded observation. In this paper, we present an algorithm developed to precisely interpret such non-atomic values and to convert fuzzy database tuples into forms acceptable to many regular (i.e., atomic-value-based) data mining algorithms.
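
    A hedged sketch of the general idea of interpreting non-atomic (multi-valued) attribute values: if the values in a cell are mutually similar above a threshold under the domain's similarity relation, the cell is collapsed to one representative; otherwise the tuple is expanded into atomic rows. The similarity table, threshold, and function names are assumptions for illustration, not the authors' algorithm.

        from itertools import combinations, product

        # Hypothetical similarity relation over one symbolic domain (values in [0, 1]).
        SIMILARITY = {
            ("red", "crimson"): 0.9,
            ("red", "blue"): 0.1,
            ("crimson", "blue"): 0.1,
        }

        def sim(a, b):
            if a == b:
                return 1.0
            return SIMILARITY.get((a, b), SIMILARITY.get((b, a), 0.0))

        def atomize(tuple_row, threshold=0.8):
            """Turn a row with multi-valued (set) attributes into atomic rows.

            If every pair of values in a cell is similar above `threshold`, the cell is
            collapsed to a single representative; otherwise the row is expanded into
            one row per value, so standard (atomic-value) mining algorithms can use it.
            """
            per_attr_choices = []
            for value in tuple_row:
                values = sorted(value) if isinstance(value, (set, frozenset)) else [value]
                if len(values) > 1 and all(sim(a, b) >= threshold for a, b in combinations(values, 2)):
                    values = [values[0]]          # mutually similar: keep one representative
                per_attr_choices.append(values)
            return [tuple(combo) for combo in product(*per_attr_choices)]

        print(atomize(({"red", "crimson"}, "small")))   # collapsed -> one atomic row
        print(atomize(({"red", "blue"}, "small")))      # dissimilar -> expanded rows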

    VXA: A Virtual Architecture for Durable Compressed Archives

    Data compression algorithms change frequently, and obsolete decoders do not always run on new hardware and operating systems, threatening the long-term usability of content archived using those algorithms. Re-encoding content into new formats is cumbersome, and highly undesirable when lossy compression is involved. Processor architectures, in contrast, have remained comparatively stable over recent decades. VXA, an archival storage system designed around this observation, archives executable decoders along with the encoded content it stores. VXA decoders run in a specialized virtual machine that implements an OS-independent execution environment based on the standard x86 architecture. The VXA virtual machine strictly limits access to host system services, making decoders safe to run even if an archive contains malicious code. VXA's adoption of a "native" processor architecture instead of type-safe language technology allows reuse of existing "hand-optimized" decoders in C and assembly language, and permits decoders access to performance-enhancing architecture features such as vector processing instructions. The performance cost of VXA's virtualization is typically less than 15% compared with the same decoders running natively. The storage cost of archived decoders, typically 30-130KB each, can be amortized across many archived files sharing the same compression method. Comment: 14 pages, 7 figures, 2 tables
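
    A minimal sketch of the archive layout idea described above, in which decoder executables are stored once and referenced by many content entries so their 30-130KB cost is amortized. The class names, fields, and the sandbox stub are assumptions for illustration, not the actual VXA on-disk format or API.

        from dataclasses import dataclass, field

        @dataclass
        class Decoder:
            name: str            # e.g. "jpeg-decoder"
            x86_image: bytes     # sandboxed decoder executable stored with the archive

        @dataclass
        class Entry:
            path: str
            decoder: str         # key into Archive.decoders
            encoded: bytes       # compressed payload in the decoder's native format

        @dataclass
        class Archive:
            decoders: dict[str, Decoder] = field(default_factory=dict)
            entries: list[Entry] = field(default_factory=list)

            def add(self, path: str, decoder: Decoder, encoded: bytes) -> None:
                # Store the decoder only once, no matter how many entries reference it.
                self.decoders.setdefault(decoder.name, decoder)
                self.entries.append(Entry(path, decoder.name, encoded))

            def extract(self, entry: Entry) -> bytes:
                decoder = self.decoders[entry.decoder]
                # A real system would run decoder.x86_image inside the sandboxed VM;
                # this only marks where that call would happen.
                return run_in_sandbox(decoder.x86_image, entry.encoded)

        def run_in_sandbox(image: bytes, payload: bytes) -> bytes:
            raise NotImplementedError("stand-in for the OS-independent x86 virtual machine")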

    Data Anonymization for Privacy Preservation in Big Data

    Cloud computing provides scalable IT infrastructure to support the processing of various big data applications in sectors such as healthcare and business. Data sets in such applications, notably electronic health records, generally contain privacy-sensitive information. The most popular technique for preserving data privacy is anonymizing the data through generalization. The proposal is to examine the problem of proximity privacy breaches in big data anonymization and to identify a scalable solution. A scalable two-phase approach, consisting of a clustering algorithm and a k-anonymity scheme with generalization and suppression, is intended to address this problem. The algorithms are designed with MapReduce to achieve high scalability by carrying out data-parallel execution in the cloud. Extensive experiments on real data sets show that the method significantly improves the defence against proximity privacy breaches as well as the scalability and efficiency of anonymization over existing methods. Anonymizing data sets through generalization to satisfy privacy requirements such as k-anonymity is a widely used class of privacy-preserving methods. Currently, the scale of data in many cloud applications grows enormously in line with the big data trend, making it a challenge for commonly used tools to capture, manage, and process such large-scale data within an acceptable time. Hence, it is a challenge for existing anonymization approaches to achieve privacy preservation for privacy-sensitive big data due to scalability issues.
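
    A toy sketch of generalization plus a k-anonymity check on quasi-identifiers (age and ZIP code); the records, the generalization rules, and the function names are illustrative and stand in for the paper's clustering-based, MapReduce-scale method.

        from collections import Counter

        def generalize(record):
            """Coarsen quasi-identifiers: bucket age into decades, truncate the ZIP code."""
            age, zipcode, diagnosis = record
            decade = (age // 10) * 10
            return (f"{decade}-{decade + 9}", zipcode[:3] + "**", diagnosis)

        def satisfies_k_anonymity(records, k):
            """Every combination of quasi-identifier values must occur at least k times."""
            groups = Counter((age, zipcode) for age, zipcode, _ in records)
            return all(count >= k for count in groups.values())

        raw = [
            (34, "50321", "flu"),
            (37, "50388", "asthma"),
            (31, "50310", "flu"),
            (52, "60614", "diabetes"),
            (58, "60601", "flu"),
            (55, "60629", "asthma"),
        ]

        generalized = [generalize(r) for r in raw]
        print(satisfies_k_anonymity(raw, k=3))          # False: raw quasi-identifiers are unique
        print(satisfies_k_anonymity(generalized, k=3))  # True: each generalized group has >= 3 rows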

    Data Anonymization Using Map Reduce on Cloud based A Scalable Two-Phase Top-Down Specialization

    A large number of cloud services require users to share private data such as electronic health records for data analysis or mining, raising privacy concerns. Anonymizing data sets through generalization to satisfy certain privacy requirements, such as k-anonymity, is a widely used category of privacy-preserving techniques. At present, the scale of data in many cloud applications grows tremendously in line with the big data trend, making it a challenge for commonly used software tools to capture, manage, and process such large-scale data within a tolerable elapsed time. As a result, it is a challenge for existing anonymization approaches to achieve privacy preservation on privacy-sensitive large-scale data sets due to their lack of scalability. In this paper, we propose a scalable two-phase top-down specialization (TDS) approach to anonymize large-scale data sets using the MapReduce framework on the cloud. Experimental evaluation results demonstrate that with our approach, the scalability and efficiency of TDS can be significantly improved over existing approaches.
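
    A small sketch of one top-down specialization step over a single quasi-identifier taxonomy, following the usual TDS idea of starting from the most general value and specializing only while k-anonymity still holds. The taxonomy, data, and helper names are hypothetical, and the paper's approach additionally runs the counting as two-phase MapReduce jobs.

        from collections import Counter

        # Hypothetical taxonomy for one quasi-identifier ("job").
        TAXONOMY = {
            "Any": ["Technical", "Clerical"],
            "Technical": ["Engineer", "Programmer"],
            "Clerical": ["Secretary", "Cashier"],
        }

        def _leaves(node):
            kids = TAXONOMY.get(node, [])
            if not kids:
                return [node]
            return [leaf for kid in kids for leaf in _leaves(kid)]

        def specialize(values, parent, children, raw_values, k):
            """Replace `parent` by the child each raw value maps to, if k-anonymity survives."""
            child_of = {leaf: child for child in children for leaf in _leaves(child)}
            proposed = [child_of.get(raw, cur) if cur == parent else cur
                        for cur, raw in zip(values, raw_values)]
            if all(count >= k for count in Counter(proposed).values()):
                return proposed, True
            return values, False

        raw_jobs = ["Engineer", "Programmer", "Engineer", "Secretary", "Cashier", "Secretary"]
        current = ["Any"] * len(raw_jobs)                           # start fully generalized
        current, ok = specialize(current, "Any", TAXONOMY["Any"], raw_jobs, k=3)
        print(ok, current)   # True: ['Technical', 'Technical', 'Technical', 'Clerical', ...]
        current, ok = specialize(current, "Technical", TAXONOMY["Technical"], raw_jobs, k=3)
        print(ok, current)   # False: specializing further would break 3-anonymity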

    On regional specialization of high- and low-tech industries

    Industries have varying abilities to benefit from externalities associated with geographical concentration, and are also likely to suffer in different degrees from crowding costs. This makes industries differ in their concentration process. We hypothesize that firms with low education levels (low tech) tend to be concentrated in areas with low urban costs and small populations, and firms with high education levels (high tech) to be concentrated in areas with high urban costs and large populations. This is shown to be true with Finnish regional-industry data.