
    Kenyamanan Termal Rumah Tepi Sungai, Studi Kasus Rumah Tepi Sungai Kahayan, Palangka Raya, Indonesia

    Early settlement in the town of Palangka Raya began when the Dayak tribes started to build their traditional houses. These houses were constructed on the river bank or even on the river itself, and consisted of two types: stilt houses and floating houses. Since the area is in a warm and humid climate, with a daily average air temperature above 28°C and relative humidity above 85%, it is natural to ask whether the inhabitants of these houses were comfortable. A recent thermal comfort study was conducted in these houses to assess whether occupants were thermally comfortable. This paper discusses that study and draws some conclusions from it.

    Keywords: thermal comfort, PMV, PPD, houses, riverside, Palangka Raya
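The PMV and PPD metrics named in the keywords are linked by the standard ISO 7730 relation, which comfort studies of this kind typically apply. A minimal sketch:

```python
import math

# ISO 7730 relation: Predicted Percentage of Dissatisfied (PPD) as a
# function of the Predicted Mean Vote (PMV, -3 cold .. +3 hot).
def ppd(pmv: float) -> float:
    return 100.0 - 95.0 * math.exp(-0.03353 * pmv**4 - 0.2179 * pmv**2)

print(round(ppd(0.0), 1))   # 5.0 -- even at a neutral PMV, 5% are dissatisfied
print(round(ppd(1.5), 1))   # warm conditions push PPD sharply higher
```

Note the floor of 5%: the model assumes no environment satisfies everyone, which is relevant when interpreting comfort surveys in hot-humid climates.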

    A Survey on Implementation of Homomorphic Encryption Scheme in Cloud based Medical Analytical System

    The privacy of sensitive personal information is an increasingly important topic as a result of the growing availability of cloud services. These privacy issues arise from the legitimate concern of a) a security breach on these cloud servers, or b) the leakage of this sensitive information by an honest-but-curious individual at the cloud service provider. Standard encryption schemes address the first concern by devising encryption schemes that are harder to break, yet they do not solve the possible misuse of this sensitive data by the cloud service providers themselves. Homomorphic encryption presents a tool that can address both types of privacy concerns: clients encrypt their sensitive information before sending it to the cloud, and the cloud then computes over the encrypted data without ever needing the decryption key. By using homomorphic encryption, servers can guarantee to clients that their sensitive information remains protected even if the servers are compromised.
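As an illustration of the additive homomorphic property the survey describes — the server computing on ciphertexts without holding the decryption key — here is a toy Paillier sketch. The tiny hard-coded primes are demonstration values only; real deployments use moduli of thousands of bits, and medical analytics systems would use a vetted library, not this.

```python
from math import gcd

# Toy Paillier cryptosystem (additively homomorphic). Demo parameters only.
p, q = 293, 433                                  # tiny primes -- NOT secure
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)     # lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)      # inverse of L(g^lam mod n^2)

def encrypt(m: int, r: int) -> int:
    # c = g^m * r^n mod n^2, with r coprime to n
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    # m = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) / n
    return ((pow(c, lam, n2) - 1) // n * mu) % n

c1 = encrypt(42, 17)
c2 = encrypt(100, 23)
# Multiplying ciphertexts adds the plaintexts -- the server never decrypts.
c_sum = (c1 * c2) % n2
print(decrypt(c_sum))   # 142
```

This is the key property the cloud-analytics setting relies on: the server can aggregate encrypted medical values and return only an encrypted result.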

    A Survey Paper on Secure Privacy Preserving Structure for Content Based Information Retrieval on Large Scale

    It is essential to protect the personal, confidential data that we share or search for on the web. A number of privacy-preserving mechanisms have previously been developed. Here we develop a new privacy-protection framework for large-scale content-based information retrieval, offering protection in two layers. First, robust hash values are used as queries to avoid revealing original features or content. Second, the client can choose to omit some bits of a hash value to increase the server's uncertainty. Since information is withheld, it is harder for the server to learn the client's interests; the server must return the hash values of all promising candidates, and the client finds the best match by searching the candidate list. Because only hash values are exchanged between server and client, the privacy of both parties is protected. We introduce the idea of tunable privacy, where the level of privacy protection can be adjusted according to policy. This is realized through hash-based piecewise inverted indexing: the extracted feature vector is divided into pieces, and each piece is indexed with a hash value linked to an inverted index list. The framework has been comprehensively tested on a very large image database, and we have evaluated both privacy-preserving performance and retrieval performance for content-recognition applications. Two robust hash algorithms are used: one based on the discrete wavelet transform, the other on random projections. Both algorithms demonstrate acceptable performance in comparison with state-of-the-art retrieval schemes. We also consider a bulk voting attack for estimating the query's identity and type. Experimental results confirm that this attack is a threat when near-duplicates exist, but its success rate depends on the number of distinct items and omitted bits, and decreases as more bits are omitted.
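The query flow above — hash, omit bits, let the server return all matching candidates — can be sketched as follows. The helper names are hypothetical, and a truncated SHA-256 stands in for the paper's actual robust hashes (DWT and random-projection based):

```python
import hashlib

def robust_hash(content: bytes, bits: int = 16) -> str:
    # Stand-in for a perceptual/robust hash: a truncated SHA-256 bit string.
    digest = hashlib.sha256(content).digest()
    return format(int.from_bytes(digest[:2], "big"), f"0{bits}b")

def omit_bits(h: str, omitted: set) -> str:
    # Client replaces selected bit positions with a wildcard before querying.
    return "".join("*" if i in omitted else b for i, b in enumerate(h))

def server_candidates(query: str, index: dict) -> list:
    # Server returns every item whose hash matches on the non-omitted bits.
    return [name for name, h in index.items()
            if all(q == "*" or q == b for q, b in zip(query, h))]

index = {name: robust_hash(name.encode())
         for name in ["cat.png", "dog.png", "sun.png"]}
query = omit_bits(robust_hash(b"cat.png"), omitted={0, 1, 2, 3})
candidates = server_candidates(query, index)
print("cat.png" in candidates)  # True -- the true match is always a candidate
```

The tunable-privacy trade-off is visible here: more omitted bits mean more candidates returned, i.e. more confusion for the server but more filtering work for the client.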

    Vigorous Module Based Data Management

    Data is important in today's life, and it must be stored using as little memory as possible. Data matters for many day-to-day purposes: government activities, organizations that need their own databases, hospitals, schools, and so on. It is necessary to save data into a database in response to users' queries with low memory consumption. We have developed a novel technique for saving data into a database using a file-similarity algorithm. The technique splits a text file into paragraphs and saves each paragraph under an appropriate reference number. These reference numbers are stored in the database; whenever the same paragraph appears in another text file, the system checks the database and saves only the references that are new for that file. This technique requires less memory, and data can be stored in an orderly manner.
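The paragraph-reference scheme described above can be sketched in a few lines. This is a hypothetical structure (content-hash reference numbers, in-memory dicts), not the paper's actual file-similarity algorithm:

```python
import hashlib

paragraph_store = {}   # reference number -> paragraph text (stored once)
file_index = {}        # filename -> ordered list of reference numbers

def save_file(name: str, text: str) -> None:
    refs = []
    for para in text.split("\n\n"):
        ref = hashlib.sha1(para.encode()).hexdigest()[:8]  # reference number
        paragraph_store.setdefault(ref, para)  # reused if paragraph repeats
        refs.append(ref)
    file_index[name] = refs

def load_file(name: str) -> str:
    # A file is reconstructed from its stored references.
    return "\n\n".join(paragraph_store[r] for r in file_index[name])

save_file("a.txt", "intro para\n\nshared para")
save_file("b.txt", "shared para\n\nnew para")  # "shared para" stored only once
print(len(paragraph_store))  # 3, not 4 -- the duplicate costs no extra storage
```

The memory saving grows with the amount of paragraph-level duplication across files, since each repeated paragraph is stored once and referenced many times.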

    The Control of DFIG for MPPT and application of STATCOM for grid stability

    The modeling, analysis, control, and simulation of a DFIG-based WECS, along with a STATCOM connected to the grid, is presented in this paper. The GVO control method is adopted in the grid-side converter (GSC) to maintain the DC bus voltage and supply reactive power. Efficient control of active and reactive power is provided through the SVO scheme in the rotor-side converter (RSC). MPPT is achieved by keeping the tip-speed ratio (TSR) at its optimum value. In this project, a STATCOM is added to the grid system to improve voltage stability during grid disturbances. A three-phase symmetrical fault is created in the distribution system, and the STATCOM connected to the distribution system regulates the voltage drop and overcurrent back to normal conditions. The developed system is simulated for different wind speeds and allows the power distribution system to remain in service during faults. A hardware prototype consisting of a voltage source converter and its controls is developed to verify the results.
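The TSR-based MPPT step can be sketched numerically. The turbine parameters below (rotor radius, optimum TSR, peak power coefficient) are assumed for illustration, not taken from the paper:

```python
import math

RHO = 1.225          # air density, kg/m^3
R = 35.0             # rotor radius, m (assumed)
LAMBDA_OPT = 8.1     # optimum tip-speed ratio (assumed)
CP_MAX = 0.48        # peak power coefficient at LAMBDA_OPT (assumed)

def mppt_speed_ref(wind_speed: float) -> float:
    # Hold TSR at its optimum: omega_ref = lambda_opt * v / R  (rad/s)
    return LAMBDA_OPT * wind_speed / R

def max_power(wind_speed: float) -> float:
    # Extractable power at Cp_max: P = 0.5 * rho * pi * R^2 * Cp * v^3
    area = math.pi * R ** 2
    return 0.5 * RHO * area * CP_MAX * wind_speed ** 3

for v in (6.0, 9.0, 12.0):
    print(f"v={v} m/s -> omega_ref={mppt_speed_ref(v):.2f} rad/s, "
          f"P_max={max_power(v) / 1e6:.2f} MW")
```

Because the rotor speed reference scales linearly with wind speed while the available power scales with its cube, tracking the optimum TSR across varying wind speeds is what makes the simulation sweep meaningful.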

    DL-Lite: Tractable Description Logics for Ontologies: A Survey

    DL-Lite is a Description Logic specifically designed to capture essential ontology languages while keeping the complexity of reasoning low. Reasoning here means computing subsumption between concepts and checking satisfiability of the whole knowledge base, as well as answering complex queries over the set of instances maintained in secondary storage. In DL-Lite, the usual DL reasoning tasks are polynomial in the size of the TBox, and query answering is polynomial in the size of the ABox (i.e., in data complexity). To the best of our knowledge, this is the first result of polynomial data complexity for query answering over DL knowledge bases. A distinguishing feature of the logic is that it allows a separation between TBox and ABox reasoning during query evaluation: the part of the process requiring TBox reasoning is independent of the ABox, and the part requiring access to the ABox can be carried out by an SQL engine, thus taking advantage of the query optimization strategies provided by current DBMSs.
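The TBox/ABox separation described above can be sketched with a toy ontology: TBox reasoning rewrites the query over the concept hierarchy, then the rewritten query runs as plain SQL over the ABox. The single axiom and the schema below are assumptions for illustration:

```python
import sqlite3

# Toy TBox (assumed axiom): Professor ⊑ Teacher -- every professor is a teacher.
subclass_of = {"Professor": "Teacher"}

def rewrite(concept: str) -> list:
    # TBox-only step, independent of the ABox: expand the queried concept
    # to every concept it subsumes under the hierarchy.
    return [c for c in list(subclass_of) + [concept]
            if c == concept or subclass_of.get(c) == concept]

# Toy ABox held in an SQL database, as DL-Lite intends.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE assertions (individual TEXT, concept TEXT)")
conn.executemany("INSERT INTO assertions VALUES (?, ?)",
                 [("alice", "Professor"), ("bob", "Teacher")])

# ABox step: the rewritten query is ordinary SQL, delegated to the DBMS.
concepts = rewrite("Teacher")
placeholders = ",".join("?" * len(concepts))
rows = conn.execute(
    f"SELECT individual FROM assertions WHERE concept IN ({placeholders})",
    concepts).fetchall()
print(sorted(r[0] for r in rows))  # ['alice', 'bob']
```

Note that alice is returned even though she is asserted only as a Professor: the TBox step did the subsumption reasoning, and the database never needed to know about the hierarchy. This delegation is what yields polynomial data complexity in practice.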