Internet of Robotic Things: converging sensing/actuating, hyperconnectivity, artificial intelligence and IoT platforms
The Internet of Things (IoT) concept is evolving rapidly and influencing new developments in various application domains, such as the Internet of Mobile Things (IoMT), Autonomous Internet of Things (A-IoT), Autonomous System of Things (ASoT), Internet of Autonomous Things (IoAT), Internet of Things Clouds (IoT-C) and the Internet of Robotic Things (IoRT), all of which advance by building on IoT technology. The IoT influence presents new development and deployment challenges in different areas, such as seamless platform integration, context-based cognitive network integration, new mobile sensor/actuator network paradigms, things identification (addressing and naming in IoT), dynamic things discoverability and many others. The IoRT raises new convergence challenges that need to be addressed: on one side, the programmability and communication of multiple heterogeneous mobile/autonomous/robotic things for cooperation, together with their coordination, configuration, exchange of information, security, safety and protection. Developments in IoT heterogeneous parallel processing/communication and dynamic systems based on parallelism and concurrency require new ideas for integrating intelligent “devices”, such as collaborative robots (COBOTS), into IoT applications. Dynamic maintainability, self-healing, self-repair of resources, changing resource state, (re-)configuration and context-based IoT systems for service implementation and integration with IoT network service composition are of paramount importance as new “cognitive devices” become active participants in IoT applications. This chapter provides an overview of the IoRT concept, technologies, architectures and applications, together with comprehensive coverage of future challenges, developments and applications.
Effervescent Breakup and Combustion of Liquid Fuels: Experiment and Modelling
This thesis presents an investigation of effervescent sprays and their application to spray combustion with emphasis on large-scale combustors.
Both aspects – modelling and experiment – are addressed. The thesis contains a general introductory part, where the underlying phenomena of spray formation and turbulent combustion are explained and effervescent atomization is presented. The adopted experimental approaches are then described, both for the spray measurements and for the measurement of wall heat fluxes during combustion experiments. In the following chapter the numerical models and their philosophy are discussed; the models for spray formation, turbulence and combustion adopted during the research are introduced and explained. The actual results of the thesis are presented in the form of separate papers (published or accepted for publication), with an additional section devoted to unpublished relevant results. It is found that standard spray models can to some extent represent effervescent sprays; however, predicting a spray flame requires more detailed spray models that accurately describe the radial and axial variations of drop sizes. Numerous experimental measurements of effervescent sprays were performed using a proposed methodology. Drop size data were analysed with emphasis on radial and axial drop size evolution, and some new phenomena are described. The inverse relationship between gas-to-liquid ratio and mean drop diameter has been confirmed. Moreover, a complete reversal in radial mean diameter trends for various axial locations has been described. Finally, a result summary is put forward that recapitulates the main accomplishments and conclusions. In the closing remarks, possible future research is outlined. Experimental data for future effervescent model validation are disclosed.
A user-oriented network forensic analyser: the design of a high-level protocol analyser
Network forensics is becoming an increasingly important tool in the investigation of cyber and computer-assisted crimes. Unfortunately, whilst much effort has been put into developing computer forensic file system analysers (e.g. EnCase and FTK), the same focus has not been given to Network Forensic Analysis Tools (NFATs). The single biggest barrier to effective NFATs is handling large volumes of low-level traffic and being able to extract and interpret forensic artefacts and their context – for example, being able to extract and render application-level objects (such as emails, web pages and documents) from low-level TCP/IP traffic, but also to understand how these applications and artefacts are being used. Whilst some studies and tools are beginning to achieve object extraction, results to date are limited to basic objects. No research has focused on analysing network traffic to understand the nature of its use – not simply the fact that a person requested a web page, but how long they spent on the application and what interactions they had whilst using the service (e.g. posting an image, or engaging in an instant-message chat). This additional layer of information can give an investigator a far richer and more complete understanding of a suspect’s activities. To this end, this paper presents an investigation into the ability to derive high-level application usage characteristics from low-level network traffic metadata. The paper presents three application scenarios – web surfing, communications and social networking – and demonstrates that it is possible to derive the user interactions (e.g. page loading, chatting and file sharing) within these systems. The paper goes on to present a framework that builds upon this capability to provide a robust, flexible and user-friendly NFAT giving access to a greater range of forensic information in a far easier format.
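The kind of inference the abstract describes – recovering high-level user interactions from low-level flow metadata – can be sketched as a simple rule-based classifier. The thresholds, labels and flow tuples below are illustrative assumptions for this sketch, not values taken from the paper:

```python
# Hypothetical sketch: classifying user interactions from flow metadata.
# Thresholds and category names are illustrative assumptions only.

def classify_interaction(bytes_out: int, bytes_in: int, duration_s: float) -> str:
    """Guess the application-level action behind a TCP flow."""
    if bytes_out > 100_000 and bytes_out > bytes_in:
        return "upload"        # e.g. posting an image
    if bytes_in > 100_000:
        return "page-load"     # large inbound transfer
    if duration_s > 60 and max(bytes_in, bytes_out) < 50_000:
        return "chat"          # long-lived, low-volume exchange
    return "other"

# Example flows: (bytes out, bytes in, duration in seconds)
flows = [
    (250_000, 4_000, 3.0),    # large outbound burst
    (2_000, 900_000, 5.0),    # large inbound transfer
    (8_000, 9_000, 300.0),    # long, chatty flow
]
labels = [classify_interaction(*f) for f in flows]
print(labels)  # → ['upload', 'page-load', 'chat']
```

A real NFAT would combine many more features (packet timing, destination reputation, TLS fingerprints) and per-application models, but the principle – mapping flow statistics to user actions – is the same.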
FABilT – finding answers in a billion triples
This submission presents the application of two coupled systems to the Billion Triples Challenge. The first system (Watson) provides the infrastructure which allows the second one (PowerAqua) to pose natural language queries to the billion-triple datasets. Watson is a gateway to the Semantic Web: it crawls and indexes semantic data online to provide a variety of access mechanisms for human users and applications. We show here how we indexed most of the datasets provided for the challenge, thus obtaining an infrastructure (comprising web services, an API, a web interface, etc.) which supports the exploration of these datasets and makes them available to any Watson-based application. PowerAqua is an open-domain question answering system which allows users to pose natural language queries to large-scale collections of heterogeneous semantic data. In this paper, we discuss the issues we faced in configuring PowerAqua and Watson for the challenge and report on our results. The system composed of Watson and PowerAqua, and applied to the Billion Triples Challenge, is called FABilT.
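The division of labour described above – an index over RDF-style triples plus a query layer that maps question terms onto that index – can be illustrated with a toy example. The triples and keyword-matching strategy below are illustrative assumptions, far simpler than what Watson and PowerAqua actually do:

```python
# Toy sketch of the idea behind Watson + PowerAqua: store RDF-style
# (subject, predicate, object) triples, then answer a query by matching
# its terms against subjects and predicates. Data is invented.

triples = [
    ("PowerAqua", "developedAt", "Open University"),
    ("Watson", "indexes", "Semantic Web data"),
    ("FABilT", "composedOf", "Watson"),
    ("FABilT", "composedOf", "PowerAqua"),
]

def answer(keywords: set[str]) -> list[str]:
    """Return objects of triples whose subject and predicate cover the keywords."""
    hits = []
    for s, p, o in triples:
        if keywords <= {s.lower(), p.lower()}:
            hits.append(o)
    return hits

print(answer({"fabilt", "composedof"}))  # → ['Watson', 'PowerAqua']
```

The real systems add crawling, full-text indexing, ontology matching and linguistic analysis on top of this basic lookup, but the query path – terms in, matching triples out – follows this shape.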
Simulating the weak death of the neutron in a femtoscale universe with near-Exascale computing
The fundamental particle theory called Quantum Chromodynamics (QCD) dictates everything about protons and neutrons, from their intrinsic properties to the interactions that bind them into atomic nuclei. Quantities that cannot be fully resolved through experiment, such as the neutron lifetime (whose precise value is important for the existence of the light atomic elements that make the sun shine and life possible), may be understood through numerical solutions to QCD. We directly solve QCD using Lattice Gauge Theory and calculate nuclear observables such as the neutron lifetime. We have developed an improved algorithm that exponentially decreases the time-to-solution and applied it on the new CORAL supercomputers, Sierra and Summit. We use run-time autotuning to distribute GPU resources, achieving 20% performance at low node count. We also developed optimal application mapping through a job manager, which allows CPU and GPU jobs to be interleaved, yielding 15% of peak performance when deployed across large fractions of CORAL. (2018 Gordon Bell Finalist.)
High-Performance Cloud Computing: A View of Scientific Applications
Scientific computing often requires the availability of a massive number of computers for performing large-scale experiments. Traditionally, these needs have been addressed by using high-performance computing solutions and installed facilities such as clusters and supercomputers, which are difficult to set up, maintain, and operate. Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. Compute resources, storage resources, as well as applications, can be dynamically provisioned (and integrated within the existing infrastructure) on a pay-per-use basis, and released when they are no longer needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensures the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers to users the desired QoS. Its flexible and service-based infrastructure supports multiple programming paradigms that let Aneka address a variety of different scenarios: from finance applications to computational science. As examples of scientific computing in the Cloud, we present a preliminary case study on using Aneka for the classification of gene expression data and the execution of an fMRI brain imaging workflow.
Aiming for ultra-scalable ePortfolio distribution using peer-to-peer networks
In this paper the authors discuss how peer-to-peer technology offers a practical solution to building highly scalable Europe-wide and worldwide ePortfolio networks over existing network infrastructures. This solution also empowers individuals by moving the management and storage responsibilities onto the portfolio owners, decoupling users from any single institutional ePortfolio service provider. The authors do not present this solution as the single way forward, but as an alternative to what is seen as a mainly client-server and Web-based approach to ePortfolio development, and to encourage developers to explore the possibilities for ePortfolio integration with emerging and relatively immature technologies. A prototype implementation is reported and future developments are described.
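Decoupling portfolio owners from a central server, as the abstract proposes, typically relies on a distributed lookup: any peer can locate the node responsible for a given owner without consulting a central directory. A minimal consistent-hashing sketch of that idea follows; the peer names and hash scheme are assumptions for illustration, not details from the paper:

```python
# Illustrative sketch of decentralized lookup: hash a portfolio owner's
# identifier onto a ring of peers (consistent hashing), so any node can
# locate the storing peer without a central registry. Peer names invented.

import hashlib

PEERS = ["peer-a", "peer-b", "peer-c", "peer-d"]

def ring_position(key: str, slots: int = 2**16) -> int:
    """Deterministically map a string to a position on the hash ring."""
    return int(hashlib.sha256(key.encode()).hexdigest(), 16) % slots

def responsible_peer(owner_id: str) -> str:
    """Find the first peer at or after the owner's ring position."""
    positions = sorted((ring_position(p), p) for p in PEERS)
    k = ring_position(owner_id)
    for pos, peer in positions:
        if k <= pos:
            return peer
    return positions[0][1]  # wrap around the ring
```

Because the mapping is deterministic, every peer computes the same answer independently, which is what lets the network scale without a single institutional provider.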
Creating Space: Building Digital Games
Studies of games, rhetoric, and pedagogy are increasingly common in our field, and indeed seem to grow each year. Nonetheless, composing and designing digital games, either as a mode of scholarship or as a classroom assignment, has not seen an equal groundswell. This selection first provides a brief overview of the existing scholarship in gaming and pedagogy, much of which currently focuses either on games as texts to analyze or as pedagogical models. While these approaches are certainly valuable, I advocate for an increased focus on game design and creation as a valuable act of composition. Such a focus engages students and scholars in a deeply multimodal practice that incorporates critical design and computational thinking. I close with suggestions on tools for new and intrepid designers.
A Review of Verbal and Non-Verbal Human-Robot Interactive Communication
In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis for both recent and future research on human-robot communication. The ten desiderata are then examined in detail, culminating in a unifying discussion and a forward-looking conclusion.