A general framework for efficient FPGA implementation of matrix product
Original article can be found at: http://www.medjcn.com/ Copyright Softmotor Limited. High performance systems are required by developers for fast processing of computationally intensive applications. Reconfigurable hardware devices in the form of Field-Programmable Gate Arrays (FPGAs) have been proposed as viable building blocks for constructing high-performance systems at an economical price. Given the importance of matrix algorithms in scientific computing applications, they are ideal candidates for exploiting the advantages offered by FPGAs. In this paper, a system for generating matrix algorithm cores is described. The system provides a catalog of efficient, user-customizable cores designed for FPGA implementation, spanning three matrix algorithm categories: (i) matrix operations, (ii) matrix transforms and (iii) matrix decomposition. The generated core can be either a general-purpose or an application-specific core. The methodology used in the design and implementation of two image processing application cores is presented. The first core is a fully pipelined matrix multiplier for colour space conversion based on distributed arithmetic principles, while the second is a parallel floating-point matrix multiplier designed for 3D affine transformations. Peer reviewed
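As a minimal sketch of the kind of operation the first core implements, colour space conversion can be expressed as a constant 3x3 matrix product applied to each pixel. The coefficients below are the standard ITU-R BT.601 RGB-to-YCbCr values, chosen here for illustration; the paper does not specify which conversion matrix its core uses.

```python
# Illustrative sketch (not the paper's FPGA core): colour space
# conversion as a constant 3x3 matrix-vector product, the operation a
# distributed-arithmetic multiplier core would compute per pixel.
# Coefficients: standard ITU-R BT.601 RGB -> YCbCr (an assumption).

RGB_TO_YCBCR = [
    [ 0.299,     0.587,     0.114   ],
    [-0.168736, -0.331264,  0.5     ],
    [ 0.5,      -0.418688, -0.081312],
]
OFFSET = [0.0, 128.0, 128.0]  # chroma components are centred on 128

def rgb_to_ycbcr(r, g, b):
    """Multiply the pixel vector by the constant conversion matrix."""
    pixel = [r, g, b]
    return tuple(
        sum(m * p for m, p in zip(row, pixel)) + off
        for row, off in zip(RGB_TO_YCBCR, OFFSET)
    )

y, cb, cr = rgb_to_ycbcr(255, 255, 255)  # pure white: max luma, neutral chroma
```

Because the matrix is constant, an FPGA implementation can replace the multipliers with lookup tables over input bit-slices, which is the essence of the distributed arithmetic approach the abstract mentions.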
Air Force Institute of Technology Research Report 2020
This Research Report presents the FY20 research statistics and contributions of the Graduate School of Engineering and Management (EN) at AFIT. AFIT research interests and faculty expertise cover a broad spectrum of technical areas related to USAF needs, as reflected by the range of topics addressed in the faculty and student publications listed in this report. In most cases, the research work reported herein is directly sponsored by one or more USAF or DOD agencies. AFIT welcomes the opportunity to conduct research on additional topics of interest to the USAF, DOD, and other federal organizations when adequate manpower and financial resources are available and/or provided by a sponsor. In addition, AFIT provides research collaboration and technology transfer benefits to the public through Cooperative Research and Development Agreements (CRADAs). Interested individuals may discuss ideas for new research collaborations, potential CRADAs, or research proposals with individual faculty using the contact information in this document.
Mathematics & Statistics 2017 APR Self-Study & Documents
UNM Mathematics & Statistics APR self-study report, review team report, response report, and initial action plan for Spring 2017, fulfilling requirements of the Higher Learning Commission
Approaches to detect SQL injection and XSS in web applications
ABSTRACT We rely increasingly on the web, accessing important information and transmitting data through it. At the same time, the quantity and impact of security vulnerabilities in web applications has grown. Billions of transactions are performed online through various kinds of web applications, and in almost all of them the user is authenticated before being granted access to the backend database storing the information. In this scenario, a well-crafted injection can give malicious or unauthorized users access, most commonly achieved through SQL injection and cross-site scripting (XSS). In this paper we provide a detailed survey of the various kinds of SQL injection and XSS attacks and of approaches to detect and prevent them. We also provide a comparative analysis of the different approaches against these attacks, present our findings, and note future expectations and the expected development of countermeasures against these attacks.
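A minimal sketch of the core SQL injection scenario the survey describes: an authentication query built by string concatenation versus the standard parameterized-query countermeasure. The table, column names and credentials are invented for illustration; this is not code from the paper.

```python
# Sketch of SQL injection and its standard countermeasure, using an
# in-memory SQLite database. Schema and data are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # String concatenation: the payload  ' OR '1'='1  turns the WHERE
    # clause into a tautology and bypasses the password check.
    query = ("SELECT * FROM users WHERE name = '%s' AND password = '%s'"
             % (name, password))
    return conn.execute(query).fetchall()

def login_safe(name, password):
    # Placeholders keep attacker input as data, never as SQL syntax.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchall()

payload = "' OR '1'='1"
assert login_vulnerable("alice", payload)   # injection succeeds
assert not login_safe("alice", payload)     # injection blocked
```

The same data-versus-syntax separation underlies XSS defences as well, where output encoding keeps user input from being interpreted as markup.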
A Comprehensive Digital Forensic Investigation Model and Guidelines for Establishing Admissible Digital Evidence
Information technology systems are attacked by offenders who use digital devices and networks to facilitate their crimes and hide their identities, creating new challenges for digital investigators. Malicious programs that exploit vulnerabilities also pose threats to digital investigators. Since digital devices such as computers and networks are used both by organisations and by digital investigators, malicious programs and risky practices that may contaminate the integrity of digital evidence can lead to loss of evidence. For these reasons, digital investigators face a major challenge in preserving the integrity of digital evidence. Not only is there no definitive comprehensive model of digital forensic investigation for ensuring the reliability of digital evidence, but there has to date been no intensive research into methods of doing so.
To address the issue of preserving the integrity of digital evidence, this research improves upon other digital forensic investigation models by creating a Comprehensive Digital Forensic Investigation Model (CDFIM), a model that improves the investigation process as well as the security mechanisms and guidelines applied during investigation. The improvement is also effected by implementing Proxy Mobile Internet Protocol version 6 (PMIPv6) with improved buffering, based on the Open Air Interface PMIPv6 (OAI PMIPv6) implementation, to provide reliable services during handover of a Mobile Node (MN) and to improve performance measures that minimize loss of data, which this research identified as a factor affecting the integrity of digital evidence. This demonstrates that the integrity of digital evidence can be preserved if loss of data is prevented.
This research supports the integration of security mechanisms and intelligent software into digital forensic investigation, which assist in preserving the integrity of digital evidence, by conducting two different attack experiments to test CDFIM. It found that when CDFIM applied the security mechanisms and guidelines alongside the investigation process, it was able to identify the attacks and ensure that the integrity of the digital evidence was preserved. It was also found that the security mechanisms and guidelines incorporated into the digital investigative process are useless when they are ignored by digital investigators, thus posing a threat to the integrity of digital evidence.
Parallel database operations in heterogeneous environments
In contrast to the traditional notion of a supercomputer, which has many processors connected by a local high-speed computer bus, heterogeneous computing environments rely on "complete" computer nodes (CPU, storage, network interface, etc.) connected to a private or public network by a conventional network interface. Computer networking has evolved over the past three decades, and, like many technologies, has grown exponentially in terms of performance, functionality and reliability. At the beginning of the twenty-first century, high-speed, highly reliable Internet connectivity has become as commonplace as electricity, and computing resources have become as standard in terms of availability and universal use as electrical power.
To use heterogeneous Grids for various applications requiring high-processing power, researchers propose the notion of computational Grids where rules are defined relating to both services and hiding the complexity of the Grid organization from the users. Thus, users would find it as easy to use as electrical power.
Generally, there is no widely accepted definition of Grids. Some researchers define it as a high-performance distributed environment. Some take into consideration its geographically distributed, multi-domain feature. Others define Grids based on the number of resources they unify.
Parallel database systems gained an important role in database research
over the past two decades due to the necessity of handling large distributed datasets in scientific computing fields such as bioinformatics, fluid dynamics and high-energy physics (HEP). This was connected with the shift from the (ultimately failed) development of highly specialized database machines to the use of conventional parallel hardware architectures. Generally, concurrent execution is achieved either through database operator parallelism or through data parallelism. The former is achieved through parallel execution of a partitioned query execution plan by different operators, while the latter is achieved through parallel execution of the same operation on partitioned data across multiple processors.
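The data-parallel form described above can be sketched in a few lines: the same relational operator runs concurrently on each horizontal partition of a table, and the partial results are combined. The relation, predicate and partition count below are invented for illustration and are not from the thesis.

```python
# Toy sketch of data parallelism for a database "selection" operator:
# the SAME operation runs on each horizontal partition in parallel.
# Table contents and the filter predicate are illustrative only.
from concurrent.futures import ThreadPoolExecutor

table = list(range(100))                      # a tiny "relation"
partitions = [table[i::4] for i in range(4)]  # horizontal partitioning

def selection(part):
    """Filter one partition; every worker runs this same operator."""
    return [row for row in part if row % 7 == 0]

with ThreadPoolExecutor(max_workers=4) as pool:
    # Combine (union) the per-partition results.
    result = [row for part_result in pool.map(selection, partitions)
              for row in part_result]

assert sorted(result) == [r for r in table if r % 7 == 0]
```

Operator parallelism, by contrast, would assign different operators of one query plan (scan, join, sort, ...) to different workers rather than splitting the data.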
Parallel database operation algorithms have been well analyzed for sequential processors, and a number of publications have proposed and analyzed such algorithms for parallel database machines. To the best of the author's knowledge, however, no specific analysis has so far been done on parallel algorithms with a focus on the specific characteristics of a Grid infrastructure.
The specific difference lies in the heterogeneous nature of Grid resources. In a "shared nothing" architecture, as found in classical supercomputers and cluster systems, all resources such as processing nodes, disks and network interconnects typically have homogeneous characteristics with regard to performance, access time and bandwidth.
In contrast, a Grid architecture comprises heterogeneous resources with different performance characteristics. The challenge of this research is to discover how to cope with, or exploit, this situation to maximize performance, and to define algorithms that lead to an optimized workflow orchestration.
To address this challenge, we developed a mathematical model to investigate the performance behavior of parallel database operations in heterogeneous environments, such as a Grid, based on a generalized multiprocessor architecture. We studied the parameters and their influence on performance, as well as the behavior of the algorithms in heterogeneous environments. We discovered that only a small adjustment to an algorithm is necessary to significantly improve its performance in heterogeneous environments. A graphical representation of the node configuration and an optimized algorithm for finding the optimal node configuration for the execution of the parallel binary merge sort have been developed.
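For readers unfamiliar with the operation being optimized, a toy sketch of parallel binary merge sort follows: each node sorts its local partition, and the sorted runs are then merged pairwise up a binary tree. The partition sizes and data are invented; the thesis's contribution concerns which heterogeneous nodes should execute these steps, not the sort itself.

```python
# Toy sketch of a parallel binary merge sort. In a real deployment the
# local sorts and each merge level would run on (heterogeneous) Grid
# nodes; here everything runs sequentially for clarity.
from heapq import merge

def binary_merge_sort(partitions):
    # Phase 1: each "node" sorts its local partition.
    runs = [sorted(p) for p in partitions]
    # Phase 2: merge sorted runs pairwise up a binary tree.
    while len(runs) > 1:
        runs = [list(merge(runs[i], runs[i + 1])) if i + 1 < len(runs)
                else runs[i]
                for i in range(0, len(runs), 2)]
    return runs[0]

data = [[5, 1, 9], [4, 4, 2], [8, 0], [7, 3, 6]]
assert binary_merge_sort(data) == [0, 1, 2, 3, 4, 4, 5, 6, 7, 8, 9]
```

On heterogeneous nodes, the merge tree's shape and node assignment determine the critical path, which is exactly what an optimized node configuration seeks to minimize.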
Finally, we verified our findings for the new algorithm by implementing it on a service-oriented architecture (SODA). The implementation confirmed the validity of the model and of the newly developed, modified algorithms.
We also give an outlook on useful extensions to our model, e.g. the use of performance indices in the node-selection algorithms, the reliability of the nodes, and approaches for the dynamic optimization of workflows.
SERVICE-BASED AUTOMATION OF SOFTWARE CONSTRUCTION ACTIVITIES
The reuse of software units, such as classes, components and services, requires professional
knowledge. Today a multiplicity of different software unit technologies,
supporting tools, and related activities used in reuse processes exists. Each of these relevant
reuse elements may also include a high number of variations and may differ in the level and
quality of necessary reuse knowledge. In such an environment of increasing variations and,
therefore, an increasing need for knowledge, software engineers must obtain such knowledge
to be able to perform software unit reuse activities. Today many different reuse activities exist
for a software unit. Some typical knowledge intensive activities are: transformation,
integration, and deployment. In addition to the problem of the amount of knowledge required
for such activities, other difficulties also exist. The global industrial environment makes it
challenging to identify sources of, and access to, knowledge. Typically, such sources (e.g.,
repositories) are made to search and retrieve information about software units and not about
the required reuse activity knowledge for a special unit. Additionally, the knowledge has to be
learned by inexperienced software engineers and, therefore, to be interpreted. This
interpretation may lead to variations in the reuse result and can differ from the estimated result
of the knowledge creator. This makes it difficult to exchange knowledge between software
engineers or global teams. Additionally, the reuse results of reuse activities have to be
repeatable and sustainable. In such a scenario, the knowledge about software reuse activities
has to be exchanged without the above mentioned problems by an inexperienced software
engineer. The literature shows a lack of techniques to store and subsequently distribute
relevant reuse activity knowledge among software engineers. The central aim of this thesis is
to enable inexperienced software engineers to use knowledge required to perform reuse
activities without experiencing the aforementioned problems. The reuse activities:
transformation, integration, and deployment, have been selected as the foundation for the
research. Based on the construction level of handling a software unit, these activities are
called Software Construction Activities (SCAcs) throughout the research. To achieve the aim,
specialised software construction activity models have been created and combined with an
abstract software unit model. As a result, different SCAc knowledge is described and
combined with different software unit artefacts needed by the SCAcs. Additionally, the
management (e.g., the execution of an SCAc) will be provided in a service-oriented
environment. Because of the focus on reuse activities, an approach which avoids changing the
knowledge level of software engineers and the abstraction view on software units and
activities, the object of the investigation differs from other approaches which aim to solve the
insufficient reuse activity knowledge problem. The research devised novel abstraction models
to describe SCAcs as knowledge models related to the relevant information of software units.
The models and the focused environment have been created using standard technologies. As a
result, these were realised easily in a real world environment. Software engineers were able to
perform single SCAcs without having previously acquired the necessary knowledge. The risk
of failing reuse decreases because single activities can be performed. The analysis of the
research results is based on a case study. An example of a reuse environment has been created
and tested in a case study to prove the operational capability of the approach. The main result
of the research is a proven concept enabling inexperienced software engineers to reuse
software units by reusing SCAcs. The research shows that the reduction in reuse time and the
decrease in learning effort are significant.
Analysing usability and security issues in design and development of information systems
Recent technological advancements and global economic challenges have meant that individuals and businesses are constantly seeking new ways to exploit Information Systems (IS), in manners that not only enhance user experiences and/or improve business processes and productivity, but also protect the individual's privacy and business assets for competitive advantage. Information Systems therefore need to be designed and developed to meet these challenges and other objectives. This thesis delves primarily into the history of IS as a basis for establishing where the problems lie or emanate from. It focuses on critically analysing existing Information Systems and investigating the conflicting issues of usability and security from an Information Systems design and development perspective by analysing various approaches. An in-depth review of the literature and a critical analysis of the requirements necessary for the design and development of a usable and secure Information System are carried out and form the intellectual framework for this research. The premise, therefore, is to look for a balanced approach or an appropriate trade-off framework for designing usable, secure systems. The research concludes with a discussion of how an envisaged conceptual framework or model can be developed based on certain influential factors, how the framework can be experimentally evaluated, and suggestions of areas for further improvement or future research.