
    New IR & Ranking Algorithm for Top-K Keyword Search on Relational Databases ‘Smart Search’

    Database management systems are as old as computers, and research and development in databases continues apace, attracting the interest of many database vendors and researchers. Much of this work aims at new modules and frameworks for more efficient and effective information retrieval through free-form search by users who have no knowledge of the database's structure. Our work extends previous efforts by introducing new algorithms and components for existing databases that let users search by keyword with high performance and effective top-k results. The intervention introduces a new table structure for indexing keywords, which helps the algorithms understand the semantics of keywords and generate only the correct CNs (Candidate Networks) for fast retrieval, ranking results according to the user's search history, keyword semantics, the distance between keywords, and the quality of keyword matches. Three modules were developed for this purpose. We implemented the three proposed modules, created the necessary tables, and developed a web search interface called 'Smart Search' to test our work with different users. The interface records all user interaction with Smart Search for analysis, and the analysis shows improvements in performance and in the effectiveness of the results returned to the user. We ran hundreds of randomly generated search terms of different sizes with multiple users; all results were recorded and analyzed by the system against different factors and parameters. We also compared our results with previous work by other researchers on the DBLP database, which we used in our research. The final analysis shows the importance of introducing new components to the database for top-k keyword search, and the high effectiveness of the proposed system.
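    The ranking idea described above combines several per-result signals (user history, distance between keywords, and match quality) into a single score before keeping the top-k results. A minimal Python sketch of that combination follows; the weights, result fields, and scoring details are illustrative assumptions, not the thesis's actual algorithm.

        # Hypothetical sketch: blend history, proximity and match signals
        # into one score, then keep the k best results. All names and
        # weights are assumptions for illustration.
        import heapq

        def score(result, query_terms, user_history,
                  w_hist=0.3, w_dist=0.3, w_match=0.4):
            if not query_terms:
                return 0.0
            text = result["text"].lower()
            matched = {t.lower() for t in query_terms if t.lower() in text}
            # Match quality: fraction of query terms found in the result.
            match = len(matched) / len(query_terms)
            # Proximity: inverse of the word span covering all matched terms.
            words = text.split()
            positions = [i for i, w in enumerate(words) if w in matched]
            if len(positions) > 1:
                dist = 1.0 / (max(positions) - min(positions) + 1)
            else:
                dist = 1.0 if positions else 0.0
            # History: boost results this user has selected before.
            hist = 1.0 if result["id"] in user_history else 0.0
            return w_hist * hist + w_dist * dist + w_match * match

        def top_k(results, query_terms, user_history, k=10):
            return heapq.nlargest(
                k, results, key=lambda r: score(r, query_terms, user_history))

    A weighted linear blend like this makes each factor's contribution easy to tune and to explain, which matches the abstract's emphasis on ranking by several independent parameters.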

    Integrated use of technologies and techniques for construction knowledge management

    The last two decades have witnessed a significant increase in discussion of the different dimensions of knowledge and knowledge management (KM), especially in the construction context. Many factors have contributed to this growing interest, including globalisation, increased competition, the diffusion of new ICTs (information and communication technologies), and new procurement routes, among others. A range of techniques and technologies can be used for KM in construction organisations. The use of techniques for KM is not new, but many KM technologies are fairly new and still evolving. This paper begins with a review of different KM techniques and technologies and then reports the findings of case studies of selected UK construction organisations, carried out to establish what tools UK construction organisations currently use to support knowledge processes. The case study findings indicate that most organisations do not adopt a structured approach to selecting KM technologies and techniques. The use of KM techniques is more evident than that of KM technologies, and there is reluctance among construction companies to invest in highly specialised KM technologies, whose high costs are viewed as the main barrier to adoption. In conclusion, the paper advocates the integrated use of KM techniques and technologies in construction organisations.

    Augmenting applications with hypermedia functionality and meta-information

    The Dynamic Hypermedia Engine (DHE) enhances analytical applications by adding relationships, semantics and other metadata to the application's output and user interface. DHE also provides additional hypermedia navigational, structural and annotation functionality. These features allow application developers and users to add guided tours, personal links and sharable annotations, among other features, into applications. DHE runs as middleware between the application user interface and its business logic and processes, in an n-tier architecture, supporting the extra functionality without altering the original systems by means of application wrappers. DHE automatically generates links at run-time for each element that has relationships and metadata. Such elements are previously identified using a Relation Navigation Analysis. On top of these links, DHE also constructs more sophisticated navigation techniques not often found on the Web. The metadata, links, navigation and annotation features supplement the application's primary functionality. This research identifies element types, or classes, in the application displays. A mapping rule encodes each relationship found between two elements of interest at the class level. When the user selects a particular element, DHE instantiates the commands included in the rules with the actual instance selected and sends them to the appropriate destination system, which then dynamically generates the resulting virtual (i.e., not previously stored) page. DHE executes concurrently with these applications, providing automated link generation and other hypermedia functionality. DHE uses the eXtensible Markup Language (XML) and related World Wide Web Consortium (W3C) XML recommendations, such as XLink, XML Schema, and RDF, to encode the semantic information required for the operation of the extra hypermedia features and for the transmission of messages between the engine modules and applications. DHE is the only approach we know of that provides automated linking and metadata services in a generic manner, based on the application semantics, without altering the applications; it also works with non-Web systems. The results of this work could be extended to other research areas, such as link ranking and filtering, automatic link generation as the result of a search query, metadata collection and support, virtual document management, hypermedia functionality on the Web, adaptive and collaborative hypermedia, web engineering, and the Semantic Web.
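    The core mechanism described above, a class-level mapping rule instantiated with the element the user selects, can be illustrated with a small Python sketch. The rule format, class names and command templates below are assumptions for illustration, not DHE's actual representation (which the abstract says is XML-based).

        # Hypothetical sketch: a rule relates two element classes; selecting
        # an instance fills in the rule's command template, producing the
        # command sent to the destination system that generates the virtual page.
        from dataclasses import dataclass

        @dataclass
        class MappingRule:
            source_class: str      # class of the selected element, e.g. "Customer"
            target_class: str      # related class the link leads to
            command_template: str  # command for the destination system

        RULES = [
            MappingRule("Customer", "Invoice", "list_invoices?customer_id={id}"),
        ]

        def links_for_selection(element_class: str, element_id: str):
            """Instantiate every rule whose source class matches the selection."""
            return [
                (rule.target_class, rule.command_template.format(id=element_id))
                for rule in RULES
                if rule.source_class == element_class
            ]

        # Selecting customer "42" yields the command behind the generated link:
        # [("Invoice", "list_invoices?customer_id=42")]
        print(links_for_selection("Customer", "42"))

    Keeping rules at the class level is what lets links be generated for every instance at run-time without storing them in advance.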

    Learner models in online personalized educational experiences: an infrastructure and some experiments

    Technologies are changing the world around us, and education is not immune from their influence: the field of teaching and learning supported by Information and Communication Technologies (ICTs), also known as Technology Enhanced Learning (TEL), has expanded hugely in recent years. This wide adoption happened thanks to the massive diffusion of broadband connections and to pervasive needs for education, closely connected to the evolution of science and technology, and it has pushed the usage of online education (distance and blended methodologies for educational experiences) to previously unexpected rates. Alongside their well-known potential, digital educational tools come with a number of downsides, such as possible disengagement on the part of the learner, the absence of the social pressures that normally exist in a classroom environment, the difficulty or even inability of learners to self-regulate and, last but not least, the depletion of the stimulus to actively participate and cooperate with lecturers and peers. These difficulties affect the teaching process and the outcomes of the educational experience (i.e., the learning process), seriously limiting and calling into question the broader applicability of TEL solutions. Overcoming these issues requires tools to support the learning process. In the literature, one known approach is to rely on a user profile that collects data during the use of eLearning platforms or tools. The resulting profile can be used to adapt the behaviour and the contents proposed to the learner. Building on this model, some studies have stressed the positive effects of disclosing the model itself for inspection by the learner; such a disclosed model is known as an Open Learner Model (OLM). The idea of opening learners' profiles and eventually integrating them with external online resources is not new, and it has the ultimate goal of creating global, long-run indicators of the learner's profile. The representation of the learner model also plays a role, moving from the traditional textual, analytic/extensive representation to graphical indicators that summarise and present one or more of the model's characteristics in a way that is more effective and natural for the user. Relying on the same learner models, and exploiting different aggregation and representation capabilities, it is possible either to support the learner's self-reflection or to foster the tutoring process and allow proper supervision by the tutor/teacher; both objectives can be reached through graphical representation of the relevant information, presented in different ways. Furthermore, with such an open approach to the learner model, personalisation and adaptation acquire a central role in the TEL experience, overcoming the previous impossibility of observing, and explaining to the learner, the reasons for the tool's interventions. As a consequence, the introduction of different tools, platforms, widgets and devices into the learning process, together with adaptation based on learner profiles, can create a personal space for fruitful usage of the rich and widespread resources available to the learner.
    This work analysed how a learner model can be represented visually to the system's users, exploring the effects and performance for learners and teachers. It then investigated how adaptive and social visualisations of an OLM affect the student experience in a TEL context. The motivation was twofold: on one side, to show that mixing data from heterogeneous, previously unrelated data sources can have a meaningful didactic interpretation; on the other, to measure the perceived impact of introducing adaptivity (and social aspects) into the graphical visualisations produced by such a tool. To achieve these objectives, the work merged user data from learning platforms into a learner profile. This was accomplished by creating a tool, named GVIS, that elaborates on the user actions collected in remote-teaching platforms. A number of test cases were performed and analysed, adopting the developed tool as the provider that extracts, aggregates and represents the data for the learner model. The impact of the GVIS tool was then estimated with self-evaluation questionnaires, analysis of log files, and knowledge quiz results, considering dimensions such as perceived usefulness, impact on motivation and commitment, the cognitive overload generated, and the impact of social data disclosure. The main result was that the tool affects the behaviour of online learners when used to provide them with indicators about their activities, especially when enhanced with social capabilities; the effects appear to be amplified where the widget usage is as simple as possible. On the learner side, the results suggest that learners appreciate the tool and recognise its value: its introduction into the online learning experience can act as a positive pressure factor, enhanced by the peer-comparison functionality, which can also reinforce student engagement and positive commitment by transmitting a sense of community and stimulating healthy competition between learners. On the teacher/tutor side, teachers seemed better supported by the presentation of compact, intuitive and just-in-time information (i.e., actions that have an educational interpretation or impact) about the monitored user or group; this gave them a clearer picture of how the class was performing and enabled them to address performance issues by adapting the resources and the teaching (and learning) approach accordingly. Although a drawback was identified regarding cognitive overload, the data collected show that users generally considered this kind of support useful. There are also indications that further analyses would be worthwhile to explore the effects that the availability and usage of such a tool introduce into teaching practices.
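    The extract-aggregate-represent pipeline described above can be made concrete with a small sketch: raw action logs from heterogeneous platforms are merged into per-learner indicators, including a peer average that a widget could use for the social comparison the abstract mentions. The log format and indicator names are assumptions for illustration, not GVIS's actual data model.

        # Hypothetical sketch of the aggregation step: from raw action logs
        # to indicators for one learner, plus a peer-comparison baseline.
        from collections import Counter
        from statistics import mean

        def learner_indicators(logs, learner_id):
            """logs: iterable of (learner_id, platform, action) tuples."""
            own = Counter(action for lid, _, action in logs if lid == learner_id)
            peers = Counter(lid for lid, _, _ in logs if lid != learner_id)
            peer_avg = mean(peers.values()) if peers else 0.0
            return {
                "own_actions": sum(own.values()),
                "by_action": dict(own),
                "peer_average_actions": peer_avg,  # basis for peer comparison
            }

        logs = [("a", "forum", "post"), ("a", "quiz", "attempt"), ("b", "forum", "post")]
        print(learner_indicators(logs, "a"))
        # {'own_actions': 2, 'by_action': {'post': 1, 'attempt': 1},
        #  'peer_average_actions': 1.0}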

    Contexts and Contributions: Building the Distributed Library

    This report updates and expands on A Survey of Digital Library Aggregation Services, originally commissioned by the DLF as an internal report in summer 2003 and released to the public later that year. First, it highlights major developments affecting the ecosystem of scholarly communications and digital libraries since the last survey and provides an analysis of OAI implementation demographics, based on a comparative review of repository registries and cross-archive search services. Second, it reviews the state of practice for a cohort of digital library aggregation services, grouping them by the problem space to which they most closely adhere. Based in part on responses collected in fall 2005 from an online survey distributed to the original core services, the report investigates the purpose, function and challenges of next-generation aggregation services. On a case-by-case basis, the advances in each service are of interest in isolation, but the report also attempts to situate these services in a larger context and to understand how they fit into a multi-dimensional and interdependent ecosystem supporting the worldwide community of scholars. Finally, the report summarizes the contributions of these services thus far and identifies obstacles requiring further attention to realize the goal of an open, distributed digital library system.
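    The cross-archive services surveyed here typically harvest metadata over OAI-PMH. As a point of reference, a minimal harvesting loop is sketched below; the endpoint URL is a placeholder, while the ListRecords verb, the oai_dc metadata prefix and the resumptionToken mechanism are defined by the OAI-PMH 2.0 specification.

        # Minimal OAI-PMH harvesting sketch (endpoint is a placeholder).
        import urllib.request
        import xml.etree.ElementTree as ET

        OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"
        ENDPOINT = "https://example.org/oai"  # placeholder repository endpoint

        def harvest(endpoint):
            url = endpoint + "?verb=ListRecords&metadataPrefix=oai_dc"
            while url:
                with urllib.request.urlopen(url) as resp:
                    root = ET.fromstring(resp.read())
                for record in root.iter(OAI_NS + "record"):
                    yield record
                # Follow the resumption token until the list is exhausted.
                token = root.find(".//" + OAI_NS + "resumptionToken")
                url = (endpoint + "?verb=ListRecords&resumptionToken=" + token.text
                       if token is not None and token.text else None)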

    VAS (Visual Analysis System): An information visualization engine to interpret World Wide Web structure

    People increasingly face problems of interpreting and filtering mass quantities of information. The enormous growth of information systems on the World Wide Web has demonstrated that we need systems to filter, interpret, organize and present information in ways that allow users to exploit these large quantities of information. People need to be able to extract knowledge from this sometimes meaningful but sometimes useless mass of data in order to make informed decisions. Web users need some information about the sort of page they might visit, such as whether it is a rarely referenced or often-referenced page. This master's thesis presents a method to address these problems using data mining and information visualization techniques.
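    The "rarely versus often referenced" signal mentioned above is, at its simplest, the in-degree of a page in the crawled link graph. A small sketch follows; the edge-list format is an illustrative assumption, and VAS itself layers visualization on top of signals like this.

        # Hypothetical sketch: count each page's in-degree from a link graph
        # to separate rarely referenced pages from often-referenced ones.
        from collections import Counter

        def in_degrees(edges):
            """edges: iterable of (source_url, target_url) hyperlinks."""
            return Counter(target for _, target in edges)

        edges = [("a.html", "b.html"), ("c.html", "b.html"), ("b.html", "a.html")]
        print(in_degrees(edges).most_common())  # b.html is the most referenced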

    DIN Spec 91345 RAMI 4.0 compliant data pipelining: An approach to support data understanding and data acquisition in smart manufacturing environments

    Today, data scientists in the manufacturing domain are confronted with a set of challenges associated with data acquisition as well as data processing, including the extraction of valuable information to support both the work of the manufacturing equipment and the manufacturing processes behind it. One essential aspect of data acquisition is pipelining, which involves various communication standards, protocols and technologies to save and transfer heterogeneous data. These circumstances make it hard to understand, find, access and extract data from the sources, depending on use cases and applications. To support this data pipelining process, this thesis proposes the use of a semantic model. The selected semantic model should be able to describe smart manufacturing assets themselves as well as to provide access to their data along their life-cycle. Many research contributions in smart manufacturing have already produced reference architectures or standards for semantic-based metadata description or asset classification. This research builds upon these outcomes and introduces a novel semantic-model-based data pipelining approach that uses the Reference Architecture Model for Industry 4.0 (RAMI 4.0) as a basis, with data pipelining in smart manufacturing as an exemplary use case, to enable easy exploration, understanding, discovery, selection and extraction of data.
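    The semantic-model idea described above, a machine-readable description of an asset and its data endpoints so that a pipeline can discover and access data without hard-coded knowledge, can be sketched as follows. The field names follow the spirit of RAMI 4.0 asset description but are assumptions for illustration, not the DIN SPEC 91345 schema.

        # Hypothetical sketch: describe a manufacturing asset and its data
        # endpoints (protocol, address, format) so a pipeline can discover
        # them instead of relying on hard-coded addresses.
        from dataclasses import dataclass, field

        @dataclass
        class DataEndpoint:
            protocol: str  # e.g. "OPC UA", "MQTT"
            address: str   # e.g. "opc.tcp://plc1:4840/temperature"
            format: str    # e.g. "float, degrees Celsius"

        @dataclass
        class AssetDescription:
            asset_id: str
            life_cycle_phase: str          # e.g. "production"
            endpoints: list = field(default_factory=list)

            def find(self, protocol):
                """Discover endpoints by protocol rather than by name."""
                return [e for e in self.endpoints if e.protocol == protocol]

        mill = AssetDescription(
            "milling-machine-07", "production",
            [DataEndpoint("OPC UA", "opc.tcp://plc1:4840/temperature",
                          "float, degrees Celsius")])
        print(mill.find("OPC UA"))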

    Peer-to-Peer Personal Health Record

    Patients and providers need to exchange medical records, but Electronic Health Records and Health Information Exchanges leave a patient's health record fragmented and controlled by the provider. This thesis proposes a Peer-to-Peer Personal Health Record network that can be extended with third-party services. The design enables patient control of health records and the tracing of exchanges. Additionally, as a demonstration of the functionality of a potential third party, a Hypertension Predictor is developed using MEPS data and deployed as a service in the proposed framework.
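    The two properties the design emphasises, patient control and traceable exchanges, can be illustrated with a small sketch in which a peer releases a record only to requesters the patient has authorised and logs every exchange attempt. All structures are illustrative assumptions, not the thesis's actual protocol.

        # Hypothetical sketch: patient-controlled release plus an audit
        # trace of every record request, granted or denied.
        import time

        class PatientPeer:
            def __init__(self):
                self.records = {}        # record_id -> record content
                self.authorized = set()  # peer ids the patient has approved
                self.audit_log = []      # trace of every exchange attempt

            def request_record(self, requester_id, record_id):
                granted = (requester_id in self.authorized
                           and record_id in self.records)
                self.audit_log.append((time.time(), requester_id, record_id, granted))
                return self.records[record_id] if granted else None

        peer = PatientPeer()
        peer.records["bp-2024"] = {"systolic": 138, "diastolic": 88}
        peer.authorized.add("clinic-a")
        print(peer.request_record("clinic-a", "bp-2024"))  # granted and traced
        print(peer.request_record("lab-x", "bp-2024"))     # denied but still traced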

    Design and development of financial applications using ontology-based multi-agent systems

    Researchers in the field of finance now use increasingly sophisticated mathematical models that require intelligent software on high-performance computing systems. Agent models designed for financial markets to date have their knowledge specified through low-level programming, requiring software expertise not normally available to finance professionals. Hence there is a need for system development methodologies in which domain experts and researchers can specify the behaviour of agent applications without any knowledge of the underlying agent software. This paper proposes an approach that achieves these objectives through ontologies that drive the behaviours of agents. The approach contributes towards building semantically aware intelligent services, where ontologies rather than low-level programming dictate the characteristics of agent applications, and it is expected to allow more extensive usage of multi-agent systems in financial business applications.
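    The central idea, agent behaviour read from a declarative, ontology-style description rather than hard-coded, can be sketched compactly. The ontology structure and rule vocabulary below are illustrative assumptions, not the paper's actual ontology.

        # Hypothetical sketch: a finance expert edits the declarative rules;
        # the agent code that evaluates them never changes.
        ONTOLOGY = {
            "TraderAgent": {
                "monitors": "price",
                "rules": [
                    {"when": {"price_below": 100.0}, "then": "buy"},
                    {"when": {"price_above": 120.0}, "then": "sell"},
                ],
            },
        }

        def decide(agent_type, price):
            """Evaluate the agent's declarative rules against an observation."""
            for rule in ONTOLOGY[agent_type]["rules"]:
                cond = rule["when"]
                if "price_below" in cond and price < cond["price_below"]:
                    return rule["then"]
                if "price_above" in cond and price > cond["price_above"]:
                    return rule["then"]
            return "hold"

        print(decide("TraderAgent", 95.0))   # -> "buy"
        print(decide("TraderAgent", 110.0))  # -> "hold"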