
    Prediction of topsoil properties at field-scale by using C-band SAR data

    Designing and validating digital soil mapping (DSM) techniques can facilitate the implementation of precision agriculture. This study develops and validates a technique for the spatial prediction of soil properties based on C-band radar data. To this end, (i) we focused on working at farm-field scale and under field conditions, a setting scarcely reported; (ii) we validated the usefulness of Random Forest regression (RF) for predicting soil properties from C-band radar data; (iii) we assessed the prediction accuracy of C-band radar data according to the coverage condition (for example, crop or fallow); and (iv) we aimed to find spatial relationships between soil apparent electrical conductivity and C-band radar. The experiment was conducted on two agricultural fields in the southern Argentine Pampas. Fifty-one Sentinel-1 Level-1 Ground Range Detected (GRD) products of C-band frequency (5.36 GHz) were processed. VH and VV polarizations and the dual-polarization SAR vegetation index (DPSVI) were estimated. Soil information was obtained through a regular-grid sampling scheme and apparent soil electrical conductivity (ECa) measurements. The soil properties predicted were texture, effective soil depth, ECa at 0-0.3 m depth and ECa at 0-0.9 m depth. The effect of water, vegetation and soil on the depolarization of the SAR backscattering was analyzed. In addition, spatial predictions of all soil properties from ordinary cokriging and conditioned Latin hypercube sampling (cLHS) were evaluated using six soil sample sizes: 20, 40, 60, 80, 100 and the total of the grid sampling scheme. The results demonstrate that the prediction accuracy of C-band SAR data for most of the soil properties evaluated varies considerably and depends closely on the coverage type and weather dynamics. The acquisitions with high prediction accuracy for all soil properties showed low values of σ°VV and σ°VH, while those with low prediction accuracy showed high values of σ°VV and low values of σ°VH. The spatial patterns among maps of all soil properties were similar across all samples and sample sizes. Under conditions in which summer crops demand large amounts of water and there is a soil water deficit, backscattering showed higher prediction accuracy for most soil properties. During the fallow season, the prediction accuracy decreased and the spatial prediction accuracy depended closely on the number of validation samples. The findings of this study corroborate that DSM at field scale can be achieved using C-band SAR data. Extrapolation and applicability of this study to other areas remain to be tested.
    EEA Balcarce.
    Fil: Domenech, Marisa. Universidad Nacional del Sur. Departamento de Agronomía; Argentina.
    Fil: Amiottia, Nilda. Universidad Nacional del Sur. Departamento de Agronomía; Argentina.
    Fil: Amiottia, Nilda. Consejo Nacional de Investigaciones Científicas y Técnicas; Argentina.
    Fil: Costa, José Luis. Instituto Nacional de Tecnología Agropecuaria (INTA). Estación Experimental Agropecuaria Balcarce; Argentina.
    Fil: Castro-Franco, Mauricio. Centro de Investigaciones de la Caña de Azúcar de Colombia. Estación Experimental vía Cali-Florida; Colombia.
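    The abstract gives no implementation details, but the Random Forest step it describes can be illustrated with a minimal sketch. The code below assumes scikit-learn and a hypothetical per-sample table; the file name, column names (sigma0_vv, sigma0_vh, dpsvi, eca_0_30cm) and hyperparameters are illustrative assumptions, not taken from the study.

```python
# Minimal sketch of Random Forest regression of a soil property (here ECa at
# 0-0.3 m) on C-band SAR predictors. File name, column names and
# hyperparameters are illustrative assumptions, not from the study.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Hypothetical table: one row per grid sample point with SAR predictors
# (sigma0 VV, sigma0 VH, DPSVI) and the measured soil property.
samples = pd.read_csv("field_grid_samples.csv")
X = samples[["sigma0_vv", "sigma0_vh", "dpsvi"]]
y = samples["eca_0_30cm"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

rf = RandomForestRegressor(n_estimators=500, random_state=42)
rf.fit(X_train, y_train)

pred = rf.predict(X_test)
print("R2:", r2_score(y_test, pred))
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
```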

    East Lancashire Research 2008

    East Lancashire Research 2008

    An Autonomic Cross-Platform Operating Environment for On-Demand Internet Computing

    The Internet has evolved into a global and ubiquitous communication medium interconnecting powerful application servers, diverse desktop computers and mobile notebooks. Along with recent developments in computer technology, such as the convergence of computing and communication devices, the way people use computers and the Internet has changed their working habits and has led to new application scenarios. On the one hand, pervasive computing, ubiquitous computing and nomadic computing become more and more important, since different computing devices such as PDAs and notebooks may be used concurrently and alternately, e.g. while the user is on the move. On the other hand, the ubiquitous availability and pervasive interconnection of computing systems have fostered various trends towards the dynamic utilization and spontaneous collaboration of available remote computing resources, addressed by approaches such as utility computing, grid computing, cloud computing and public computing. From a general point of view, the common objective of this development is the use of Internet applications on demand, i.e. applications that are not installed in advance by a platform administrator but are dynamically deployed and run as they are requested by the application user. The heterogeneous and unmanaged nature of the Internet represents a major challenge for the on-demand use of custom Internet applications across heterogeneous hardware platforms, operating systems and network environments. Promising remedies are autonomic computing systems that are supposed to maintain themselves without particular user or application intervention. In this thesis, an Autonomic Cross-Platform Operating Environment (ACOE) is presented that supports On-Demand Internet Computing (ODIC), such as dynamic application composition and ad hoc execution migration. The approach is based on an integration middleware called crossware that does not replace existing middleware but operates as a self-managing mediator between diverse application requirements and heterogeneous platform configurations. A Java implementation of the Crossware Development Kit (XDK) is presented, followed by a description of the On-Demand Internet Computing System (ODIX). The feasibility of the approach is shown by the implementation of an Internet Application Workbench, an Internet Application Factory and an Internet Peer Federation, which illustrate the use of ODIX to support local, remote and distributed ODIC, respectively. Finally, the suitability of the approach is discussed with respect to the support of ODIC.

    Technological Impediments to B2C Electronic Commerce: An Update

    In 1999, Rose et al. identified six categories of technological impediments inhibiting the growth of electronic commerce: (1) download delays, (2) interface limitations, (3) search problems, (4) inadequate measures of Web application success, (5) security, and (6) a lack of Internet standards. This paper updates the findings of the original paper by surveying the practitioner literature for the five-year period from June 1999 to June 2004. We identify how advances in technology both partially resolve concerns with the original technological impediments and inhibit their full resolution. We find that, despite five years of technological progress, the six categories of technological impediments remain relevant. Furthermore, the maturation of e-Commerce has increased the Internet's complexity, making these impediments harder to address. Two kinds of complexity are especially relevant: evolutionary complexity and skill complexity. Evolutionary complexity refers to the need to preserve the existing Internet and resolve impediments simultaneously. Unfortunately, because the Internet consists of multiple incompatible technologies, philosophies, and attitudes, additions to the Internet infrastructure are difficult to integrate. Skill complexity refers to the skill sets necessary for managing e-Commerce change. As the Internet evolves, more skills become relevant. Unfortunately, individuals, companies and organizations are unable to master and integrate all necessary skills. As a result, new features added to the Internet do not consider all relevant factors and are thus sub-optimal.

    Virtual Cluster Management for Analysis of Geographically Distributed and Immovable Data

    Thesis (Ph.D.) - Indiana University, Informatics and Computing, 2015.
    Scenarios exist in the era of Big Data where computational analysis needs to utilize widely distributed and remote compute clusters, especially when the data sources are sensitive or extremely large, and thus unable to move. A large dataset in Malaysia could be ecologically sensitive, for instance, and unable to be moved outside the country's boundaries. Controlling an analysis experiment in this virtual cluster setting can be difficult on multiple levels: with setup and control, with managing the behavior of the virtual cluster, and with interoperability issues across the compute clusters. Further, datasets can be distributed among clusters, or even across data centers, so it becomes critical to utilize data locality information to optimize the performance of data-intensive jobs. Finally, datasets are increasingly sensitive and tied to certain administrative boundaries, though once the data has been processed, the aggregated or statistical result can be shared across those boundaries. This dissertation addresses the management and control of a widely distributed virtual cluster holding sensitive or otherwise immovable data sets through a controller. The Virtual Cluster Controller (VCC) gives control back to the researcher. It creates virtual clusters across multiple cloud platforms. In recognition of sensitive data, it can establish a single network overlay over widely distributed clusters. We define a novel class of data, notably immovable data that we call "pinned data", where the data is treated as a first-class citizen instead of being moved to where it is needed. We draw from our earlier work with a hierarchical data processing model, Hierarchical MapReduce (HMR), to process geographically distributed data, some of which are pinned data. The applications implemented in HMR use an extended MapReduce model in which computations are expressed as three functions: Map, Reduce, and GlobalReduce. Further, by facilitating information sharing among resources, applications, and data, the overall performance is improved. Experimental results show that the overhead of the VCC is minimal. HMR outperforms the traditional MapReduce model while processing a particular class of applications. The evaluations also show that information sharing between resources and applications through the VCC shortens the hierarchical data processing time, as well as satisfying the constraints on the pinned data.
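    The three-function model named above (Map, Reduce, GlobalReduce) can be sketched with a toy example. The code below is not the HMR implementation: each "cluster" is just a local Python list and a word count stands in for the real analysis, but it shows how per-cluster Map/Reduce results are merged by a GlobalReduce so that only aggregates, not the pinned data, leave a site.

```python
# Toy sketch of the three-function HMR model (Map, Reduce, GlobalReduce).
# The distributed machinery is omitted: each "cluster" is a local list and
# only the per-cluster aggregates are passed to the GlobalReduce.
from collections import defaultdict

def map_fn(record):
    # local Map: emit (key, value) pairs for one record
    for word in record.split():
        yield word, 1

def reduce_fn(key, values):
    # local Reduce: aggregate within one cluster
    return key, sum(values)

def global_reduce_fn(key, values):
    # GlobalReduce: merge the per-cluster aggregates
    return key, sum(values)

def run_local(records):
    grouped = defaultdict(list)
    for record in records:
        for key, value in map_fn(record):
            grouped[key].append(value)
    return dict(reduce_fn(k, v) for k, v in grouped.items())

# Two "clusters", each holding data that never leaves its site.
cluster_a = ["pinned data stays put", "data locality matters"]
cluster_b = ["pinned data is a first-class citizen"]

local_results = [run_local(cluster_a), run_local(cluster_b)]

merged = defaultdict(list)
for result in local_results:
    for key, value in result.items():
        merged[key].append(value)

print(dict(global_reduce_fn(k, v) for k, v in merged.items()))
```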

    Muse, 2006-02-02

    Memorial University of Newfoundland's student newspaper, providing coverage of university life as well as national and international news relating to students. Frequency: biweekly, 1951-present. Not published: March 1954? - April 1955. Includes advertisements.

    Vegetation dynamics in northern South America on different time scales

    The overarching goal of this doctoral thesis was to understand the dynamics of vegetation activity occurring across time scales globally and in a regional context. To achieve this, I took advantage of open data sets, novel mathematical approaches for time series analysis, and state-of-the-art technology to effectively manipulate and analyze time series data. Specifically, I disentangled the longest records of vegetation greenness (>30 years) in tandem with climate variables at 0.05° resolution for a global-scale analysis (Chapter 3). Later, I focused my analysis on a particular region, northern South America (NSA), to evaluate vegetation activity at seasonal (Chapter 4) and interannual scales (Chapter 5) using moderate spatial resolution (0.0083°). Two main approaches were used in this research: time series decomposition through the Fast Fourier Transform (FFT), and dimensionality reduction through Principal Component Analysis (PCA). Overall, assessing vegetation-climate dynamics at different temporal scales facilitates the observation and understanding of processes that are often obscured by one or a few dominant processes. On the one hand, the global analysis showed the dominant seasonality of vegetation and temperature in northern latitudes in comparison with the heterogeneous patterns of the tropics, and the remarkable longer-term oscillations in the southern hemisphere. On the other hand, the regional analysis showed the complex and diverse land-atmosphere interactions in NSA when assessing the seasonality and interannual variability of vegetation activity associated with ENSO. In conclusion, disentangling these processes and assessing them separately allows one to formulate new hypotheses about mechanisms of ecosystem functioning, reveal hidden patterns of climate-vegetation interactions, and inform about vegetation dynamics relevant for ecosystem conservation and management.
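    As a rough illustration of the FFT-based separation of time scales mentioned above, the sketch below splits a synthetic monthly vegetation-index series into an annual-cycle component and a longer-term component by masking frequency bins. The data, band limits and variable names are invented for the example and are not taken from the thesis, which applies such decompositions per pixel to >30-year records.

```python
# Illustrative FFT-based time-scale separation of a synthetic monthly
# vegetation-index series into a seasonal (annual-cycle) component and a
# longer-term component by masking frequency bins.
import numpy as np

months = np.arange(12 * 30)                        # 30 years of monthly data
seasonal = 0.2 * np.sin(2 * np.pi * months / 12)   # annual cycle
trend = 0.001 * months                             # slow long-term change
ndvi = 0.5 + seasonal + trend + 0.02 * np.random.randn(months.size)

spectrum = np.fft.rfft(ndvi - ndvi.mean())
freqs = np.fft.rfftfreq(months.size, d=1.0)        # cycles per month

annual_band = (freqs > 1 / 14) & (freqs < 1 / 10)  # periods near 12 months
seasonal_part = np.fft.irfft(np.where(annual_band, spectrum, 0), n=months.size)
longterm_part = np.fft.irfft(np.where(freqs < 1 / 24, spectrum, 0), n=months.size)

print("variance explained by the annual band:",
      round(seasonal_part.var() / ndvi.var(), 2))
```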

    Virtual Machine Image Management for Elastic Resource Usage in Grid Computing

    Grid Computing has evolved from an academic concept to a powerful paradigm in the area of high performance computing (HPC). Over the last few years, powerful Grid computing solutions were developed that allow the execution of computational tasks on distributed computing resources. Grid computing has recently attracted many commercial customers. To enable commercial customers to process sensitive data in the Grid, strong security mechanisms must be put in place to secure the customers' data. In contrast, the development of Cloud Computing, which entered the scene in 2006, was driven by industry: it was designed with respect to security from the beginning. Virtualization technology is used to separate users, e.g. by isolating the different users of a system in virtual machines, which prevents them from accessing other users' data. The use of virtualization in the context of Grid computing was examined early on and was found to be a promising approach to counter the security threats that have appeared with commercial customers. One main part of the work presented in this thesis is the Image Creation Station (ICS), a component which allows users to administer their virtual execution environments (virtual machines) themselves and which is responsible for managing and distributing the virtual machines in the entire system. In contrast to Cloud computing, which was designed to allow even inexperienced users to execute their computational tasks in the Cloud easily, Grid computing is much more complex to use. The ICS makes the Grid easier to use by overcoming traditional limitations such as the need to have required software installed on the compute nodes on which users execute their computational tasks. This allows users to bring commercial software to the Grid for the first time, without the need for local administrators to install the software on computing nodes that are accessible by all users. Moreover, the administrative burden is shifted from the local Grid site's administrator to the users or to experienced software providers, enabling the provision of individually tailored virtual machines to each user. But the ICS is not only responsible for enabling users to manage their virtual machines themselves; it also ensures that the virtual machines are available on every site that is part of the distributed Grid system. A second aspect of the presented solution focuses on the elasticity of the system by automatically acquiring free external resources depending on the system's current workload. In contrast to existing systems, the presented approach allows the system's administrator to add or remove resource sets during runtime without needing to restart the entire system. Moreover, the presented solution allows users not only to use existing Grid resources but also to scale out to Cloud resources and use them on demand. By ensuring that unused resources are shut down as soon as possible, the computational costs of a given task are minimized. In addition, the presented solution allows each user to specify which resources can be used to execute a particular job. This is useful when a job processes sensitive data, e.g. data that is not allowed to leave the company. To obtain a comparable function in today's systems, a user must submit her computational task to a particular resource set, losing the ability to schedule automatically when more than one set of resources could be used.
In addition, the proposed solution prioritizes each set of resources by taking different metrics into account (e.g. the level of trust or computational costs) and tries to schedule the job to the resources with the highest priority first. Notably, the priority often mirrors the physical distance from the resources to the user: a locally available cluster usually has a higher priority due to its high level of trust and its computational costs, which are usually lower than those of Cloud resources. Therefore, this scheduling strategy minimizes the costs of job execution while improving security at the same time, since data is not necessarily transferred to remote resources and the probability of attacks by malicious external users is minimized. Bringing both components together results in a system that adapts automatically to the current workload by using external (e.g. Cloud) resources together with existing locally available resources or Grid sites, and provides individually tailored virtual execution environments to the system's users.
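A minimal sketch of the priority-based scheduling idea described above is given below; the metric names, weights and class names are assumptions made for illustration, not the thesis implementation. Each resource set is scored from trust and cost, the job is restricted to the sets its owner permits, and the highest-priority candidate is chosen first.

```python
# Minimal sketch of priority-based scheduling over resource sets: each set is
# scored from a few metrics (here trust and cost), jobs are restricted to the
# sets their owner allows, and the highest-priority set is tried first.
# Metric values, weights and names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ResourceSet:
    name: str
    trust: float   # 0..1, e.g. a local cluster is fully trusted (1.0)
    cost: float    # normalized cost per CPU hour, 0..1

    def priority(self) -> float:
        # higher trust and lower cost -> higher priority
        return 0.6 * self.trust + 0.4 * (1.0 - self.cost)

def schedule(job_allowed: set[str], resource_sets: list[ResourceSet]) -> ResourceSet:
    # only consider the resource sets the job's owner has permitted
    candidates = [r for r in resource_sets if r.name in job_allowed]
    if not candidates:
        raise RuntimeError("no permitted resource set available")
    return max(candidates, key=ResourceSet.priority)

sets = [
    ResourceSet("local-cluster", trust=1.0, cost=0.2),
    ResourceSet("partner-grid-site", trust=0.7, cost=0.3),
    ResourceSet("public-cloud", trust=0.4, cost=0.6),
]

# A job with sensitive data may exclude the public cloud entirely.
print(schedule({"local-cluster", "partner-grid-site"}, sets).name)
```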