2,586 research outputs found

    Adaptive multibeam phased array design for a Spacelab experiment

    The parametric tradeoff analyses and design of an Adaptive Multibeam Phased Array (AMPA) for a Spacelab experiment are described. The AMPA Experiment System was designed with particular emphasis on maximizing channel capacity and minimizing implementation and cost impacts for future austere maritime and aeronautical users operating with a low-gain hemispherical-coverage antenna element, low effective radiated power, and a low antenna gain-to-system-noise-temperature ratio.

    Fast Multivariate Search on Large Aviation Datasets

    Multivariate Time Series (MTS) are ubiquitous, generated in areas as disparate as sensor recordings in aerospace systems, music and video streams, medical monitoring, and financial systems. Domain experts are often interested in searching for interesting multivariate patterns in MTS databases that can contain up to several gigabytes of data. Surprisingly, research on MTS search is very limited: most existing work only supports queries of the same length as the data, or queries on a fixed set of variables. In this paper, we propose an efficient and flexible subsequence search framework for massive MTS databases that, for the first time, enables querying on any subset of variables with arbitrary time delays between them. We propose two provably correct algorithms to solve this problem: (1) an R-tree Based Search (RBS), which uses Minimum Bounding Rectangles (MBRs) to organize the subsequences, and (2) a List Based Search (LBS) algorithm, which uses sorted lists for indexing. We demonstrate the performance of these algorithms on two large MTS databases from the aviation domain, each containing several million observations. Both tests show that our algorithms have very high prune rates (>95%).
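The R-tree Based Search above prunes candidate subsequences by comparing the query against Minimum Bounding Rectangles before any exact distance computation. A minimal sketch of that MBR lower-bound pruning idea (the function names and threshold handling are illustrative assumptions, not the paper's actual implementation):

```python
def mbr_lower_bound(query, mbr):
    """Lower bound on the Euclidean distance from a query point to any
    point inside a Minimum Bounding Rectangle. mbr is a list of
    (lo, hi) intervals, one per dimension."""
    total = 0.0
    for q, (lo, hi) in zip(query, mbr):
        if q < lo:
            total += (lo - q) ** 2
        elif q > hi:
            total += (q - hi) ** 2
        # a coordinate inside the interval contributes zero
    return total ** 0.5

def prune(query, mbrs, threshold):
    """Keep only MBRs whose lower bound is within the threshold; only
    these candidates need an exact (expensive) distance computation."""
    return [i for i, m in enumerate(mbrs)
            if mbr_lower_bound(query, m) <= threshold]
```

Because the lower bound never exceeds the true distance, discarding an MBR whose bound exceeds the threshold can never discard a true match, which is what makes the pruning provably correct.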

    Privacy preserving path recommendation for moving user on location based service

    With the increasing adoption of location based services, privacy is becoming a major concern. To hide the identity and location behind a request to a location based service, most methods consider a set of users in a reasonable region so as to confuse their requests. When there are not enough users, the cloaking region must be expanded to a larger area or the response must be delayed; either way degrades the quality of service. In this paper, we tackle the privacy problem in a predictive way by recommending a privacy-preserving path for a requester. We consider the popular navigation application, where users may continuously query different location based servers during their movements. Based on a set of metrics on privacy, distance, and the quality of service that an LBS requester often desires, a secure path is computed for each request according to the user's preference, and can be dynamically adjusted when the situation changes. A set of experiments is performed to verify our method, and the relationships between parameters are discussed in detail. We also discuss how to apply our method in practical applications. © 2013 IEEE.
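The recommended path balances privacy, distance, and quality of service according to the user's preference. One simple way such preferences can be combined is a weighted score per candidate path; the metric names and weights below are purely illustrative assumptions, not the paper's actual model:

```python
def path_score(metrics, weights):
    """Combine per-path metrics into a single score; higher is better.
    metrics: dict with 'privacy' (higher is better), 'distance'
    (lower is better), and 'qos' (higher is better), each assumed
    to be normalized to [0, 1]. Weights encode user preference."""
    return (weights['privacy'] * metrics['privacy']
            - weights['distance'] * metrics['distance']
            + weights['qos'] * metrics['qos'])

def recommend(paths, weights):
    """Pick the candidate path with the best weighted score."""
    return max(paths, key=lambda p: path_score(p, weights))
```

Re-invoking `recommend` with updated metrics as the user moves corresponds to the dynamic adjustment the abstract describes.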

    Seabasing and joint expeditionary logistics

    Student Integrated Project. Includes supplementary material: Executive Summary and Presentation. Recent conflicts such as Operation Desert Shield/Storm and Operation Iraqi Freedom highlight the logistics difficulties the United States faces by relying on foreign access and infrastructure and on large supply stockpiles ashore to support expeditionary operations. The Navy's transformational vision for the future, Sea Power 21, involves Seabasing as a way to address these difficulties by projecting and sustaining joint forces globally from the sea. This study analyzes logistics flow to, within, and from a Sea Base to an objective, and the architectures and systems needed to rapidly deploy and sustain a brigade-size force. Using the Joint Capabilities Integration and Development System (JCIDS), the study applies a systems engineering framework to examine current systems, programs of record, and proposed systems out to the year 2025. Several capability gaps that hamper a brigade-size force from seizing the initiative anywhere in the world within a 10-day period point to a need for dedicated lift assets, such as high-speed surface ships or lighter-than-air ships, to facilitate the rapid formation of the Sea Base. Additionally, the study identifies the need for large-payload/high-speed or load-once/direct-to-objective connector capabilities to minimize the number of at-sea transfers required to employ such a force from the Sea Base in 10 hours. With these gaps addressed, the Joint Expeditionary Brigade is supportable from the Sea Base.
    http://archive.org/details/seabasingndjoint109456918N

    Use of the space shuttle to avoid spacecraft anomalies

    An existing database covering 304 spacecraft of the U.S. space program was analyzed to determine the effect the space shuttle might have had on individual spacecraft failures and other anomalies if it had been operational throughout the period covered by the data. By combining the results of this analysis, information on the prelaunch activities of selected spacecraft programs, and shuttle capabilities data, the potential impact of the space shuttle on future space programs was derived. The shuttle was found to be highly effective in the prevention or correction of spacecraft anomalies, with 887 of the 1,230 anomalies analyzed being favorably impacted by full utilization of shuttle capabilities. The shuttle was also determined to have a far-reaching and favorable influence on the design, development, and test phases of future space programs. This is documented in 37 individual statements of impact.

    Density-based algorithms for active and anytime clustering

    Data intensive applications in fields like biology, medicine, and neuroscience require effective and efficient data mining technologies. Advanced data acquisition methods produce data of constantly increasing volume and complexity. As a consequence, the need for new data mining technologies to deal with complex data has emerged during the last decades. In this thesis, we focus on the data mining task of clustering, in which objects are separated into different groups (clusters) such that objects inside a cluster are more similar to each other than to objects in different clusters. In particular, we consider density-based clustering algorithms and their applications in biomedicine. The core idea of the density-based clustering algorithm DBSCAN is that each object within a cluster must have a certain number of other objects inside its neighborhood. Compared with other clustering algorithms, DBSCAN has many attractive benefits; for example, it can detect clusters of arbitrary shape and is robust to outliers. Thus, DBSCAN has attracted a lot of research interest during the last decades, with many extensions and applications. In the first part of this thesis, we aim at developing new algorithms based on the DBSCAN paradigm to deal with the new challenges of complex data, particularly expensive distance measures and incomplete availability of the distance matrix. Like many other clustering algorithms, DBSCAN suffers from poor performance when facing expensive distance measures for complex data. To tackle this problem, we propose a new algorithm based on the DBSCAN paradigm, called Anytime Density-based Clustering (A-DBSCAN), that works in an anytime scheme: in contrast to the original batch scheme of DBSCAN, A-DBSCAN first produces a quick approximation of the clustering result and then continuously refines it as it runs.
    Experts can interrupt the algorithm, examine the results, and choose between (1) stopping the algorithm at any time, whenever they are satisfied with the result, to save runtime, and (2) continuing the algorithm to achieve better results. Such an anytime scheme has proven in the literature to be a very useful technique for time-consuming problems. We also introduce an extended version of A-DBSCAN, called A-DBSCAN-XS, which is more efficient and effective than A-DBSCAN when dealing with expensive distance measures. Since DBSCAN relies on the cardinality of the neighborhood of objects, it requires the full distance matrix. For complex data, these distances are usually expensive, time consuming, or even impossible to acquire due to high cost, high time complexity, noisy and missing data, etc. Motivated by these potential difficulties of acquiring the distances among objects, we propose another approach for DBSCAN, called Active Density-based Clustering (Act-DBSCAN). Given a budget limitation B, Act-DBSCAN may use at most B pairwise distance computations, ideally producing the same result as if it had the entire distance matrix at hand. The general idea of Act-DBSCAN is that it actively selects the most promising pairs of objects for distance calculation and tries to approximate the desired clustering result as closely as possible with each calculation. This scheme provides an efficient way to reduce the total cost of performing the clustering, and thus limits the potential weakness of DBSCAN when dealing with the distance sparseness problem of complex data. As a fundamental data clustering algorithm, density-based clustering has many applications in diverse fields. In the second part of this thesis, we focus on an application of density-based clustering in neuroscience: the segmentation of white matter fiber tracts in the human brain acquired from Diffusion Tensor Imaging (DTI).
    We propose a model that evaluates the similarity between two fibers as a combination of structural similarity and connectivity-related similarity of fiber tracts. Various distance measure techniques from fields like time-sequence mining are adapted to calculate the structural similarity of fibers, and density-based clustering is used as the segmentation algorithm. We show how A-DBSCAN and A-DBSCAN-XS serve as novel solutions for the segmentation of massive fiber datasets and provide unique features to assist experts during the fiber segmentation process.
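The thesis builds on the classic batch DBSCAN scheme that A-DBSCAN and Act-DBSCAN refine. A minimal sketch of that baseline (this shows only the standard batch algorithm, not the thesis's anytime or active variants):

```python
def dbscan(points, eps, min_pts, dist):
    """Classic DBSCAN: a point is a core point if at least min_pts
    points (including itself) lie within eps of it; clusters grow by
    expanding from core points. Returns one label per point, with -1
    marking noise."""
    n = len(points)
    labels = [None] * n
    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        neighbors = [j for j in range(n) if dist(points[i], points[j]) <= eps]
        if len(neighbors) < min_pts:
            labels[i] = -1  # noise; may later be relabeled as a border point
            continue
        cluster += 1
        labels[i] = cluster
        seeds = [j for j in neighbors if j != i]
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = [k for k in range(n) if dist(points[j], points[k]) <= eps]
            if len(nb) >= min_pts:  # j is a core point: keep expanding
                seeds.extend(k for k in nb if labels[k] is None)
    return labels
```

Every pair of points may trigger a distance evaluation here, which is exactly the cost that A-DBSCAN amortizes over time and Act-DBSCAN caps with its budget B.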

    Review on bibliography related to antimicrobials

    In this report, a bibliographic survey has been carried out in the field of antimicrobials. Not all antimicrobials have been included, only those that are the subject of study in the GBMI group in Terrassa, and others of interest, including chitosan and other biopolymers. The effect of nanoparticles is of great interest, and in this sense the effects of Ag nanoparticles and antibiotic nanoparticles (nanobiotics) have been reviewed. The report focuses on new publications, and the antimicrobial effect of peptides has been considered. In particular, the influence of antimicrobials on membranes has received much attention, along with its study using the Langmuir technique, which is of great utility in biomimetic studies. The building of antimicrobial systems with new techniques (bottom-up approaches), such as the Layer-by-Layer technique, is also covered in the bibliography. The antibiofilm effect has also been considered, together with new ideas on quorum sensing and quorum quenching.

    Historical Places In Malacca (Enhancement Of Maps Manipulation Capability Through The Website) Using MySQL

    Interest in the field of GIS has been emerging at a fast pace in recent years. Much research has been done, giving tremendous and beneficial results for this field, but most GIS applications were developed using vendors' own proprietary databases, which can cause many problems. Geographic Information Systems, also known as GIS, are all about gathering data, building layers upon layers of this data, and displaying them on a computer screen. The aim of this study is to use MySQL for developing a GIS application, thus demonstrating MySQL's ability to support GIS-based data, in other words, spatial data. While the main objective is to use MySQL to manage the spatial data, the other integral objectives of this project are providing better features and quality spatial data from the system for its users, and enhancing the capability of manipulating the maps provided through the system. The methodology used in developing this project follows the RAD methodology, which involves the stages of Requirement Planning, User Design, Construction, and Implementation; these stages are discussed further in Chapter 3, Methodology and Project Work. In conclusion, from the research done it can be seen that MySQL is able to support the development of a GIS-based application through the new release of its database, which includes spatial data management capabilities.
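MySQL's spatial extensions store geometries in columns of types such as POINT and support minimum-bounding-rectangle functions such as MBRContains, which is the kind of capability the project relies on. A hypothetical schema and viewport query for a historical-places map layer (the table and column names are illustrative, not taken from the project):

```python
# Illustrative MySQL statements for storing and querying spatial data;
# in practice they would be executed through a connector library.
CREATE_TABLE = """
CREATE TABLE historical_places (
    id INT PRIMARY KEY AUTO_INCREMENT,
    name VARCHAR(100),
    location POINT NOT NULL,
    SPATIAL INDEX (location)
);
"""

# Find places inside a map viewport (a bounding box), using the
# MBRContains spatial function on the indexed POINT column.
VIEWPORT_QUERY = """
SELECT id, name, ST_AsText(location)
FROM historical_places
WHERE MBRContains(
    ST_GeomFromText(
        'POLYGON((102.2 2.1, 102.3 2.1, 102.3 2.3, 102.2 2.3, 102.2 2.1))'),
    location
);
"""
```

Storing the geometry in the database itself, rather than in a vendor's proprietary GIS store, is what lets a plain website issue queries like this directly.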

    New model framework and structure and the commonality evaluation model

    The development of a framework and structure for shuttle-era unmanned spacecraft projects and of a commonality evaluation model is documented. The methodology developed for using the model to perform cost trades and comparative evaluations in commonality studies is discussed. The model framework consists of categories of activities associated with the spacecraft system development process, while the model structure describes the physical elements to be treated as separate identifiable entities. Cost estimating relationships for subsystem and program-level components were calculated.
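Cost estimating relationships of the kind mentioned above commonly take a power-law form, mapping a technical parameter such as subsystem weight to cost. A generic sketch of that form (the coefficients are purely illustrative, not the model's fitted values):

```python
def cer_cost(weight_kg, a, b):
    """Generic power-law cost estimating relationship:
    cost = a * weight^b, where a and b would be fitted from
    historical program data. The values used below are
    purely illustrative."""
    return a * weight_kg ** b

# Example: compare two subsystem weights under the same assumed CER.
light = cer_cost(100.0, a=2.5, b=0.8)
heavy = cer_cost(200.0, a=2.5, b=0.8)
```

With an exponent b < 1, doubling the weight less than doubles the estimated cost, which is the economy-of-scale behavior such relationships are often fitted to capture.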