
    Noise correction on LANDSAT images using a spline-like algorithm

    Many applications using LANDSAT images face a dilemma: the user needs a certain scene (for example, a flooded region), but that particular image may present interference or noise in the form of horizontal stripes. During automatic analysis, this interference or noise may cause false readings of the region of interest. To minimize this interference or noise, many solutions are used, for instance, taking the average (simple or weighted) of the neighboring vertical points. In cases of high interference (more than one adjacent line lost), the method of averages may not suit the desired purpose. The solution proposed is to use a spline-like algorithm (weighted splines). This type of interpolation is simple to implement on a computer, fast, uses only four points in each interval, and eliminates the need to solve a linear equation system. In the normal mode of operation, the first and second derivatives of the solution function are continuous and determined by the data points, as in cubic splines. It is possible, however, to impose the values of the first derivatives, in order to account for sharp boundaries, without increasing the computational effort. Some examples using the proposed method are also shown.
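    The abstract's key points (four points per interval, no linear system, C1 continuity) can be illustrated with a Catmull-Rom-style cubic that fills a lost scanline from two good rows on each side. This is a sketch, not the paper's exact weighted-spline scheme, whose weights are not given in the abstract; it assumes at least two good rows exist on each side of the gap.

    ```python
    import numpy as np

    def repair_stripes(image, bad_rows):
        """Fill missing horizontal scanlines by 4-point cubic interpolation
        along each column (a Catmull-Rom-style stand-in for the paper's
        weighted splines): C1-continuous, four points per interval, and
        no linear equation system to solve."""
        img = image.astype(float).copy()
        bad = set(bad_rows)
        good = np.array([r for r in range(img.shape[0]) if r not in bad])
        for r in bad_rows:
            below = good[good < r][-2:]   # two nearest good rows below: p0, p1
            above = good[good > r][:2]    # two nearest good rows above: p2, p3
            p0, p1 = img[below[0]], img[below[1]]
            p2, p3 = img[above[0]], img[above[1]]
            # parameter t of the missing row between p1 and p2
            t = (r - below[1]) / (above[0] - below[1])
            img[r] = 0.5 * ((2 * p1)
                            + (-p0 + p2) * t
                            + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t**2
                            + (-p0 + 3 * p1 - 3 * p2 + p3) * t**3)
        return img
    ```

    On a smooth ramp the interpolant reproduces the lost line exactly, which is where the averages method also works; the spline's advantage claimed in the abstract is for multi-line gaps and curved intensity profiles.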

    TimeTrader: Exploiting Latency Tail to Save Datacenter Energy for On-line Data-Intensive Applications

    Datacenters running on-line, data-intensive applications (OLDIs) consume significant amounts of energy. However, reducing their energy is challenging due to their tight response-time requirements. A key aspect of OLDIs is that each user query goes to all or many of the nodes in the cluster, so the overall time budget is dictated by the tail of the replies' latency distribution; replies see latency variations in both the network and compute. Previous work proposes to achieve load-proportional energy by slowing down the computation at lower datacenter loads based directly on response times (i.e., at lower loads, the proposal exploits the average slack in the time budget provisioned for the peak load). In contrast, we propose TimeTrader to reduce energy by exploiting the latency slack in the sub-critical replies which arrive before the deadline (e.g., 80% of replies are 3-4x faster than the tail). This slack is present at all loads and subsumes the previous work's load-related slack. While the previous work shifts the leaves' response-time distribution to consume the slack at lower loads, TimeTrader reshapes the distribution at all loads by slowing down individual sub-critical nodes without increasing missed deadlines. TimeTrader exploits slack in both the network and compute budgets. Further, TimeTrader leverages Earliest Deadline First scheduling to largely decouple critical requests from the queuing delays of sub-critical requests, which can then be slowed down without hurting critical requests. A combination of real-system measurements and at-scale simulations shows that, without adding to missed deadlines, TimeTrader saves 15-19% and 41-49% energy at 90% and 30% loading, respectively, in a datacenter with 512 nodes, whereas previous work saves 0% and 31-37%.
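    The two mechanisms the abstract names, EDF ordering and slack-based slowdown of sub-critical replies, can be sketched as follows. The guard band and the latency predictor are assumptions for illustration, not the paper's exact mechanism.

    ```python
    import heapq

    def edf_order(requests):
        """Drain requests in Earliest-Deadline-First order, so critical
        (tight-deadline) requests are not queued behind slowed-down
        sub-critical ones. Each request is a (deadline_ms, request_id) pair."""
        heap = list(requests)
        heapq.heapify(heap)
        return [heapq.heappop(heap)[1] for _ in range(len(heap))]

    def slowdown_factor(predicted_latency_ms, deadline_ms, guard_ms=5.0):
        """How much a sub-critical reply can be slowed while still landing
        before the deadline, minus a guard band. A reply with no slack is
        treated as critical and runs at full speed."""
        slack = deadline_ms - guard_ms - predicted_latency_ms
        if slack <= 0:
            return 1.0
        return (deadline_ms - guard_ms) / predicted_latency_ms
    ```

    For example, a reply predicted to take 20 ms against a 100 ms deadline with a 5 ms guard band could run up to 4.75x slower, which is the per-reply slack TimeTrader converts into energy savings.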

    A procedure for testing the quality of LANDSAT atmospheric correction algorithms

    There are two basic methods for testing the quality of an algorithm to minimize atmospheric effects on LANDSAT imagery: (1) test the results a posteriori, using ground truth or control points; (2) use a method based on image data plus estimation of additional ground and/or atmospheric parameters. A procedure based on the second method is described. In order to select the parameters, initially the image contrast is examined for a series of parameter combinations. The contrast improves for better corrections. In addition, the correlation coefficient between two subimages of the same scene, taken at different times, is used for parameter selection. The regions to be correlated should not have changed considerably over time. A few examples using this proposed procedure are presented.
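    The two selection criteria described, image contrast and temporal correlation over an unchanged region, can be sketched as a scoring loop over candidate parameter sets. The standard-deviation contrast proxy, the lexicographic ranking, and the user-supplied `correct(image, params)` function are assumptions of this sketch, not the paper's procedure.

    ```python
    import numpy as np

    def score_correction(corrected_t1, corrected_t2):
        """Score one parameter combination: image contrast (standard
        deviation as a simple proxy) and the correlation coefficient
        between two co-registered subimages of an unchanged region."""
        contrast = corrected_t1.std()
        corr = np.corrcoef(corrected_t1.ravel(), corrected_t2.ravel())[0, 1]
        return contrast, corr

    def select_parameters(image_t1, image_t2, candidate_params, correct):
        """Try each candidate atmospheric-parameter set, applying the
        supplied correction, and keep the best-scoring combination."""
        best = None
        for params in candidate_params:
            c1, c2 = correct(image_t1, params), correct(image_t2, params)
            contrast, corr = score_correction(c1, c2)
            if best is None or (contrast, corr) > (best[1], best[2]):
                best = (params, contrast, corr)
        return best[0]
    ```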

    Securing Hadoop using OAuth 2.0 and Real Time Encryption Algorithm

    Hadoop is the most popular distributed programming framework for processing large amounts of data with the Hadoop Distributed File System (HDFS), but processing personal or sensitive data in a distributed environment demands secure computing. Hadoop was originally designed without any security model. Hadoop projects now treat data security as a top agenda item, reflecting the classification of critical data. Data from applications such as finance is deemed sensitive and needs to be secured. With the growing acceptance of Hadoop, there is an increasing trend to incorporate more enterprise security features. Encryption and decryption are applied before writing data to, and after reading data from, HDFS. The Advanced Encryption Standard (AES) protects data at each cluster node, performing encryption or decryption before a write or read occurs at HDFS. Earlier methods do not provide data privacy, because the same mechanism secures the data of all users at HDFS, and they also increase the file size; they are therefore unsuitable for real-time applications. Hadoop requires an additional mechanism to give each user unique data security and to encrypt data at a compatible speed. We have implemented a method in which OAuth performs authentication and provides a unique authorization token for each user; this token is used in the encryption technique, providing data privacy for all users of Hadoop. The real-time encryption algorithm used to secure data in HDFS uses a key generated from the authorization token. DOI: 10.17762/ijritcc2321-8169.15071
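    The core idea, deriving a per-user encryption key from the OAuth authorization token, can be sketched as below. PBKDF2 stands in for the paper's unspecified key derivation, and a SHA-256 counter-mode keystream stands in for AES so the sketch stays standard-library only; a real deployment would use AES from a vetted crypto library.

    ```python
    import hashlib

    def derive_user_key(oauth_token: str, salt: bytes = b"hdfs-block-salt") -> bytes:
        """Derive a per-user 256-bit key from the OAuth 2.0 authorization
        token, so each user's HDFS data is encrypted under a distinct key."""
        return hashlib.pbkdf2_hmac("sha256", oauth_token.encode(), salt, 100_000)

    def xor_keystream(data: bytes, key: bytes, nonce: bytes) -> bytes:
        """Toy counter-mode keystream (SHA-256 based) as a stand-in for
        AES-CTR. Applying it twice with the same key and nonce decrypts.
        Note the ciphertext is the same length as the plaintext, addressing
        the file-size growth the abstract attributes to earlier methods."""
        out = bytearray()
        counter = 0
        while len(out) < len(data):
            block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
            out.extend(block)
            counter += 1
        return bytes(b ^ k for b, k in zip(data, out))

    def encrypt_for_hdfs(data: bytes, token: str, nonce: bytes = b"\x00" * 12) -> bytes:
        """Encrypt a block before it is written to HDFS, keyed by the
        user's authorization token."""
        return xor_keystream(data, derive_user_key(token), nonce)
    ```

    Because the key depends on the token, two users encrypting the same block produce different ciphertexts, which is the per-user privacy property the earlier shared-key schemes lacked.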

    Seasonal Variation of Phytoplankton Diversity in Anchepalya Lake, Bengaluru Urban, India

    Seasonal dynamics of phytoplankton populations were studied in Anchepalya Lake for a period of one year, from March 2013 to February 2014, covering three seasons. Phytoplankton sampling, collection, and quantitative and qualitative population analysis were performed using APHA (2005) standard methods. Plankton were counted using a Sedgwick-Rafter counting cell, and phytoplankton were identified using the limnology identification manual by Adoni (1985). A total of 88 genera of phytoplankton were recorded. Phytoplankton abundance increased in the order Chlorophyceae > Bacillariophyceae > Cyanophyceae > Euglenophyceae. The present study revealed that the lake water is polluted by the direct entry of sewage and effluent discharge.

    PUMA: Purdue MapReduce Benchmarks Suite


    SafeBet: Secure, Simple, and Fast Speculative Execution

    Spectre attacks exploit microprocessor speculative execution to read and transmit forbidden data outside the attacker's trust domain and sandbox. Recent hardware schemes allow potentially-unsafe speculative accesses but prevent the secret's transmission by delaying most access-dependent instructions even in the predominantly-common, no-attack case, which incurs performance loss and hardware complexity. Instead, we propose SafeBet, which allows only, and does not delay most, safe accesses, achieving both security and high performance. SafeBet is based on the key observation that speculatively accessing a destination location is safe if the location's access by the same static trust domain has been committed previously, and potentially unsafe otherwise. We extend this observation to handle inter-trust-domain code and data interactions. SafeBet employs the Speculative Memory Access Control Table (SMACT) to track non-speculative trust-domain code region-destination pairs. Disallowed accesses wait until reaching commit to trigger well-known replay, with virtually no change to the pipeline. Software simulations using SpecCPU benchmarks show that SafeBet uses an 8.3-KB SMACT per core to perform within 6% on average (63% at worst) of the unsafe baseline, behind which NDA-restrictive, a previous scheme of security and hardware complexity comparable to SafeBet's, lags by 83% on average.
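    The SMACT's allow/deny decision described above can be sketched as a set of committed (trust domain, code region, destination region) triples. The region granularity and the table's hardware sizing (8.3 KB per core in the paper) are abstracted away in this illustrative model.

    ```python
    class SMACT:
        """Sketch of a Speculative Memory Access Control Table: records
        (trust domain, code region, destination region) triples when an
        access commits, and allows a later speculative access only if the
        same triple was committed before. Disallowed accesses would wait
        until commit and replay, per the paper's mechanism."""

        def __init__(self, region_bits: int = 12):
            self.region_bits = region_bits   # 4-KB regions by default
            self.committed = set()

        def _region(self, addr: int) -> int:
            return addr >> self.region_bits

        def record_commit(self, domain: str, pc: int, addr: int) -> None:
            """Called when a non-speculative access commits."""
            self.committed.add((domain, self._region(pc), self._region(addr)))

        def is_safe_speculative(self, domain: str, pc: int, addr: int) -> bool:
            """True if a speculative access may proceed without delay."""
            return (domain, self._region(pc), self._region(addr)) in self.committed
    ```

    Tracking regions rather than exact addresses is what keeps the table small; the trade-off is that any committed access within a region whitelists the whole region for that code region and trust domain.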