
    Detecting Abnormal Behavior in Web Applications

    The rapid advance of web technologies has made the Web an essential part of our daily lives. However, network attacks have exploited vulnerabilities of web applications and caused substantial damage to Internet users. Detecting network attacks is the first and most important step in network security. A major branch in this area is anomaly detection. This dissertation concentrates on detecting abnormal behaviors in web applications by employing the following methodology. For a web application, we conduct a set of measurements to reveal the existence of abnormal behaviors in it. We observe the differences between normal and abnormal behaviors. By applying a variety of methods in information extraction, such as heuristic algorithms, machine learning, and information theory, we extract features useful for building a classification system to detect abnormal behaviors. In particular, we have studied four detection problems in web security. The first is detecting unauthorized hotlinking behavior that plagues hosting servers on the Internet. We analyze a group of common hotlinking attacks and the web resources targeted by them. Then we present an anti-hotlinking framework for protecting materials on hosting servers. The second problem is detecting aggressive automated behavior on Twitter. Our work determines whether a Twitter user is a human, bot or cyborg based on the degree of automation. We observe the differences among the three categories in terms of tweeting behavior, tweet content, and account properties. We propose a classification system that uses a combination of features extracted from an unknown user to determine the likelihood of its being a human, bot or cyborg. Furthermore, we shift the detection perspective from automation to spam and introduce the third problem, namely detecting social spam campaigns on Twitter.
Evolved from individual spammers, spam campaigns manipulate and coordinate multiple accounts to spread spam on Twitter, and display collective characteristics. We design an automatic classification system based on machine learning and apply multiple features to classify spam campaigns. Complementary to conventional spam detection methods, our work brings efficiency and robustness. Finally, we extend our detection research into the blogosphere to capture blog bots. In this problem, detecting the human presence is an effective defense against the automatic posting ability of blog bots. We introduce behavioral biometrics, mainly mouse and keyboard dynamics, to distinguish between humans and bots. By passively monitoring user browsing activities, this detection method does not require any direct user participation, and improves the user experience.
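The degree-of-automation idea above can be illustrated with a toy feature: the Shannon entropy of inter-tweet intervals, where low entropy suggests machine-like regularity in posting times. This is only a minimal sketch under assumed parameters (a 60-second histogram bin); the dissertation's actual feature set and thresholds are not reproduced here.

```python
import math
from collections import Counter

def interval_entropy(timestamps, bin_size=60):
    """Shannon entropy (bits) of inter-tweet intervals, binned to bin_size
    seconds. Near-zero entropy hints at automated, regularly timed posting."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if not intervals:
        return 0.0
    bins = Counter(round(iv / bin_size) for iv in intervals)
    n = len(intervals)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

# A bot posting exactly every 5 minutes vs. an irregularly posting human
bot_times = [i * 300 for i in range(20)]
human_times = [0, 40, 600, 610, 3000, 3300, 9000, 9050, 20000, 21000]
assert interval_entropy(bot_times) == 0.0
assert interval_entropy(human_times) > 1.0
```

Such a score would be one feature among many (tweet content, account properties) fed to the classifier; it is not a detector on its own.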

    A software based mentor system

    This thesis describes the architecture, implementation issues and evaluation of Mentor - an educational support system designed to mentor students in their university studies. Students can ask (by typing) natural language questions, and Mentor will use several educational paradigms to present information from its Knowledge Base or from data-mined online Web sites in response. Typically the questions focus on the student’s assignments or on their preparation for examinations. Mentor is also pro-active in that it prompts the student with questions such as "Have you started your assignment yet?". If the student responds and enters into a dialogue with Mentor, then, based upon the student’s questions and answers, it guides them through a Directed Learning Path planned by the lecturer, specific to that assessment. The objectives of the research were to determine whether such a system could be designed, developed and applied in a large-scale, real-world environment, and whether the resulting system was beneficial to the students using it. The study was significant in that it provided an analysis of the design and implementation of the system as well as a detailed evaluation of its use. This research integrated the Computer Science disciplines of network communication, natural language parsing, user interface design and software agents, together with pedagogies from the Computer Aided Instruction and Intelligent Tutoring System fields of Education. Collectively, these disciplines provide the foundation for the two main thesis research areas of Dialogue Management and Tutorial Dialogue Systems. The development and analysis of the Mentor System required the design and implementation of an easy-to-use text-based interface as well as a hyper- and multi-media graphical user interface, a client-server system, and a dialogue management system based on an extensible kernel.
The multi-user Java-based client-server system used Perl-5 Regular Expression pattern matching for Natural Language Parsing, along with a state-based Dialogue Manager and a Knowledge Base marked up using the XML-based Virtual Human Markup Language. The kernel was also used in other Dialogue Management applications, such as computer-generated Talking Heads. The system also enabled users to easily program their own knowledge into the Knowledge Base, and to program new information retrieval or management tasks, so that the system could grow with the user. The overall framework to integrate and manage the above components into a usable system employed suitable educational pedagogies that helped in the student’s learning process. The thesis outlines the learning paradigms used in, and summarises the evaluation of, three course-based Case Studies of university students’ perception of the system, to see how effective and useful it was and whether students benefited from using it. This thesis will demonstrate that Mentor met its objectives and was very successful in helping students with their university studies. As one participant indicated: ‘I couldn’t have done without it.’
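The combination of regular-expression pattern matching with a state-based dialogue manager can be sketched as a rule table of (state, pattern) pairs mapped to (response, next-state) pairs. Mentor itself used Perl-5 regular expressions inside a Java system; the Python below is only an illustrative analogue, and the patterns and states are invented, not Mentor's actual rules.

```python
import re

# (state, pattern) -> (response, next_state); all rules here are invented examples
RULES = [
    ("idle", re.compile(r"\b(assignment|assessment)\b", re.I),
     ("Have you started your assignment yet?", "assignment")),
    ("assignment", re.compile(r"\b(no|not yet)\b", re.I),
     ("Let's begin with step 1 of the learning path.", "guiding")),
    ("idle", re.compile(r"\bexam\b", re.I),
     ("Which topic would you like to revise?", "revision")),
]

def respond(state, utterance):
    """Return (reply, new_state) for the first rule matching the current
    dialogue state and utterance, else a fallback that keeps the state."""
    for rule_state, pattern, (reply, next_state) in RULES:
        if rule_state == state and pattern.search(utterance):
            return reply, next_state
    return "Could you rephrase that?", state

reply, state = respond("idle", "I need help with my assignment")
assert state == "assignment"
reply, state = respond(state, "No, not yet")
assert state == "guiding"
```

Keeping the rules in data rather than code mirrors the thesis's point that users could extend the Knowledge Base without modifying the kernel.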

    and Cost/Benefits Opportunities

    Acquisition Research Program Sponsored Report Series. The acquisition of artificial intelligence (AI) systems is a relatively new challenge for the U.S. Department of Defense (DoD). Given the potential for high-risk failures of AI system acquisitions, it is critical for the acquisition community to examine new analytical and decision-making approaches to managing the acquisition of these systems, in addition to the existing approaches (i.e., Earned Value Management, or EVM). In addition, many of these systems reside in small start-up or relatively immature system development companies, further clouding the acquisition process, because their business processes are unlike those of the large defense contractors. This can lead to limited access to the data, information, and processes that are required in the standard DoD acquisition approach (i.e., the 5000 series). The well-known recurring problems in acquiring information technology automation within the DoD will likely be exacerbated in acquiring complex and risky AI systems. Therefore, more robust, agile, and analytically driven acquisition methodologies will be required to help avoid costly disasters in acquiring these kinds of systems. This research provides a set of analytical tools for acquiring organically developed AI systems through a comparison and contrast of the proposed methodologies, demonstrating when and how each method can be applied to improve the acquisition lifecycle for AI systems, and providing additional insights and examples of how some of these methods can be applied. This research identifies, reviews, and proposes advanced quantitative, analytically based methods within the integrated risk management (IRM) and knowledge value added (KVA) methodologies to complement the current EVM approach.
This research examines whether the various methodologies—EVM, KVA, and IRM—could be used within the Defense Acquisition System (DAS) to improve the acquisition of AI. While this paper does not recommend one of these methodologies over the others, certain methodologies, specifically IRM, may be more beneficial when used throughout the entire acquisition process instead of within only a portion of it. Due to the complexity of AI systems, this research looks at AI as a whole rather than at specific types of AI. Approved for public release; distribution is unlimited.
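For reference, the EVM baseline that the report proposes to complement rests on a handful of standard index formulas. These are the textbook definitions, not figures or formulas taken from the report itself:

```python
def evm_indices(pv, ev, ac):
    """Standard earned value management indices.
    pv: planned value, ev: earned value, ac: actual cost (same currency units)."""
    return {
        "CV":  ev - ac,   # cost variance (negative means over budget)
        "SV":  ev - pv,   # schedule variance (negative means behind schedule)
        "CPI": ev / ac,   # cost performance index (< 1 means over budget)
        "SPI": ev / pv,   # schedule performance index (< 1 means behind)
    }

# A project that has earned 80 units of value at a cost of 120, against a plan of 100
m = evm_indices(pv=100.0, ev=80.0, ac=120.0)
assert m["CPI"] == 80.0 / 120.0 and m["SPI"] == 0.8
```

KVA and IRM extend this picture with value-based and risk-simulation measures, which have no comparably compact closed form.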

    Proceedings of the tenth international conference Models in developing mathematics education: September 11 - 17, 2009, Dresden, Saxony, Germany

    This volume contains the papers presented at the International Conference on “Models in Developing Mathematics Education” held from September 11-17, 2009 at the University of Applied Sciences, Dresden, Germany. The Conference was organized jointly by the University of Applied Sciences and the Mathematics Education into the 21st Century Project - a non-commercial international educational project founded in 1986. The Mathematics Education into the 21st Century Project is dedicated to the improvement of mathematics education world-wide through the publication and dissemination of innovative ideas. Many prominent mathematics educators have supported and contributed to the project, including the late Hans Freudenthal, Andrejs Dunkels and Hilary Shuard, as well as Bruce Meserve and Marilyn Suydam, Alan Osborne and Margaret Kasten, Mogens Niss, Tibor Nemetz, Ubi D’Ambrosio, Brian Wilson, Tatsuro Miwa, Henry Pollack, Werner Blum, Roberto Baldino, Waclaw Zawadowski, and many others throughout the world. Information on our project and its future work can be found on our project home page: http://math.unipa.it/~grim/21project.htm. It has been our pleasure to edit all of the papers for these Proceedings. Not all papers are about research in mathematics education; a number of them report on innovative experiences in the classroom and on new technology. We believe that “mathematics education” is fundamentally a “practicum”, and in order to be “successful” all new materials, new ideas and new research must be tested and implemented in the classroom - the real “chalk face” of our discipline and of our profession as mathematics educators. These Proceedings begin with a Plenary Paper and then the contributions of the Principal Authors in alphabetical name order. We sincerely thank all of the contributors for their time and creative effort.
It is clear from the variety and quality of the papers that the conference attracted many innovative mathematics educators from around the world. These Proceedings will therefore be useful in reviewing past work and looking ahead to the future.

    The development of GIS to aid conservation of architectural and archaeological sites using digital terrestrial photogrammetry

    This thesis is concerned with the creation and implementation of an Architectural/Archaeological Information System (A/AIS) integrating digital terrestrial photogrammetry and CAD facilities, as applicable to the requirements of architects, archaeologists and civil engineers. Architects and archaeologists are involved with the measurement, analysis and recording of historical buildings and monuments. Hard-copy photogrammetric methods supporting such analyses and documentation are well established, but the requirement to interpret, classify and quantitatively process photographs can be time consuming. Such methods also have limited application: the photographs cannot be re-examined if the information desired is not directly presented, and the extraction of 3-D coordinates is much more challenging than in a digital photogrammetric environment. The A/AIS has been developed to the point that it can provide a precise and reliable technique for non-contact 3-D measurement. The speed of on-line data acquisition, the high degree of automation and the adaptability of the technique have made it a powerful measurement tool with a great number of applications for architectural or archaeological sites. The designed tool (A/AIS) was successful in producing the expected results in tasks examined for St. Avit Senieur Abbey in France, Strome Castle in Scotland, the Gilbert Scott Building of Glasgow University, the Hunter Memorial in Glasgow University and the Anobanini Rock in Iran. The goals of this research were: to extract, using digital photogrammetric digitising, the 3-D coordinates of architectural/archaeological features; to identify an appropriate 3-D model; to import 3-D points/lines into an appropriate 3-D modeller; to generate 3-D objects; to design and implement a prototype architectural information system using the above 3-D model; and to compare this approach to traditional approaches of measuring and archiving the required information.
An assessment of the contribution of digital photogrammetry, GIS and CAD to the surveying, conservation, recording and documentation of historical buildings and cultural monuments - including digital rectification and restitution, feature extraction for the creation of 3-D digital models, and computer visualisation - is the focus of this research.
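The core photogrammetric step of recovering a 3-D point from two overlapping photographs can be sketched with standard linear (DLT) triangulation: each image observation contributes two linear constraints on the homogeneous world point, solved by SVD. The projection matrices and point below are synthetic illustrations, not data from the case-study sites or the A/AIS implementation.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: intersect the viewing rays through image
    points x1, x2, given 3x4 projection matrices P1, P2. Returns a 3-vector."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector = homogeneous world point
    return X[:3] / X[3]

# Synthetic check: two cameras looking along z, world point at (1, 2, 10)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-2.0], [0], [0]])])   # camera shifted in x
X_true = np.array([1.0, 2.0, 10.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
assert np.allclose(triangulate(P1, P2, x1, x2), X_true)
```

In practice the projection matrices come from camera calibration and orientation of the terrestrial photographs, and the recovered points feed the 3-D modeller.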

    National Aeronautics and Space Administration (NASA)/American Society for Engineering Education (ASEE) Summer Faculty Fellowship Program: 1995.

    The JSC NASA/ASEE Summer Faculty Fellowship Program was conducted at JSC, including the White Sands Test Facility, by Texas A&M University and JSC. The objectives of the program, which began nationally in 1964 and at JSC in 1965, are (1) to further the professional knowledge of qualified engineering and science faculty members; (2) to stimulate an exchange of ideas between participants and NASA; (3) to enrich and refresh the research and teaching activities of the participants' institutions; and (4) to contribute to the research objectives of the NASA centers. Each faculty fellow spent at least 10 weeks at JSC engaged in a research project in collaboration with a NASA/JSC colleague. In addition to the faculty participants, the 1995 program included five students. This document is a compilation of the final reports on the research projects completed by the faculty fellows and visiting students during the summer of 1995. The reports of two of the students are integral with those of the respective fellows; three students wrote separate reports.

    Sensor web geoprocessing on the grid

    Recent standardisation initiatives in the fields of grid computing and geospatial sensor middleware provide an exciting opportunity for the composition of large scale geospatial monitoring and prediction systems from existing components. Sensor middleware standards are paving the way for the emerging sensor web which is envisioned to make millions of geospatial sensors and their data publicly accessible by providing discovery, task and query functionality over the internet. In a similar fashion, concurrent development is taking place in the field of grid computing whereby the virtualisation of computational and data storage resources using middleware abstraction provides a framework to share computing resources. Sensor web and grid computing share a common vision of world-wide connectivity and in their current form they are both realised using web services as the underlying technological framework. The integration of sensor web and grid computing middleware using open standards is expected to facilitate interoperability and scalability in near real-time geoprocessing systems. The aim of this thesis is to develop an appropriate conceptual and practical framework in which open standards in grid computing, sensor web and geospatial web services can be combined as a technological basis for the monitoring and prediction of geospatial phenomena in the earth systems domain, to facilitate real-time decision support. The primary topic of interest is how real-time sensor data can be processed on a grid computing architecture. This is addressed by creating a simple typology of real-time geoprocessing operations with respect to grid computing architectures. A geoprocessing system exemplar of each geoprocessing operation in the typology is implemented using contemporary tools and techniques which provides a basis from which to validate the standards frameworks and highlight issues of scalability and interoperability. 
It was found that it is possible to combine standardised web services from each of these aforementioned domains, despite issues of interoperability resulting from differences in web service style and security between specifications. A novel integration method for the continuous processing of a sensor observation stream is suggested, in which a perpetual processing job is submitted as a single continuous compute job. Although this method was found to be successful, two key challenges remain: a mechanism for consistently scheduling real-time jobs within an acceptable time-frame must be devised, and the trade-off between efficient grid resource utilisation and processing latency must be balanced. The lack of actual implementations of distributed geoprocessing systems built using sensor web and grid computing has hindered the development of standards, tools and frameworks in this area. This work contributes to the small number of existing implementations in this field by identifying potential workflow bottlenecks in such systems and gaps in the existing specifications. Furthermore, it sets out a typology of real-time geoprocessing operations that is anticipated to facilitate the development of real-time geoprocessing software. EThOS - Electronic Theses Online Service. Engineering and Physical Sciences Research Council (EPSRC): School of Civil Engineering & Geosciences, Newcastle University. United Kingdom.
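The perpetual-job idea, in which one long-running compute job consumes the sensor observation stream rather than a new grid job being scheduled per observation, can be sketched as a polling loop. Here an in-process queue stands in for a sensor web endpoint, the mean stands in for a real geoprocessing operation, and the batch size and poll timeout are invented parameters; this is a sketch of the pattern, not the thesis's implementation.

```python
import queue
import threading

def perpetual_job(observations, results, batch_size=3, poll_timeout=0.1):
    """Single long-running job: drain the observation stream in small batches
    and process each batch, instead of submitting one grid job per observation."""
    batch = []
    while True:
        try:
            obs = observations.get(timeout=poll_timeout)
        except queue.Empty:
            continue                      # keep polling; the job never exits on idle
        if obs is None:                   # sentinel: stream closed
            break
        batch.append(obs)
        if len(batch) == batch_size:
            results.put(sum(batch) / len(batch))   # stand-in geoprocess: mean
            batch = []
    if batch:                             # flush the final partial batch
        results.put(sum(batch) / len(batch))

observations, results = queue.Queue(), queue.Queue()
job = threading.Thread(target=perpetual_job, args=(observations, results))
job.start()
for reading in [1.0, 2.0, 3.0, 10.0, 20.0, None]:
    observations.put(reading)
job.join()
assert [results.get(), results.get()] == [2.0, 15.0]
```

The batch size is exactly the latency/utilisation trade-off the abstract identifies: larger batches use the compute resource more efficiently but delay each result.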

    NASA Tech Briefs, November 2002

    Topics include: a technology focus on engineering materials; electronic components and systems; software; mechanics; machinery/automation; manufacturing; bio-medical; physical sciences; information sciences; books and reports; and a special section of Photonics Tech Briefs.