Partners with clinical practice: Evaluating the student and staff experiences of on-line continuing professional development for qualified nephrology practitioners
The inclusion of online learning technologies into the higher education (HE) curriculum is frequently associated with the design and development of new models of learning. One could argue that e-learning even demands a reconfiguration of traditional methods of learning and teaching. However, this transformation in pedagogic methodology does not affect lecturers and teachers alone. Online learning has "pervasive impacts and changes in other HE functions" (HEFCE, p.2). Thus, e-learning is a transformational process that poses new challenges for staff and students, both in educational methods and in support.
Many political, clinical, financial and social influences impact on registered health professionals' ability to continue their professional development. This is particularly pertinent in the delivery of nephrology care.
In order to evaluate the programme, which has now run for two years at this institution, an evaluative research methodology was used to explore the experiences of the staff and students involved. Qualitative data were collected from the students, and a reflective framework formed the basis of a focus group for the staff.
This paper will present how a virtual learning environment (VLE) was developed using the pedagogic framework of solution-focused learning. It will evaluate the students' experiences compared with those of their traditional classroom-learning counterparts, and highlight the reflections of staff developers as they moved into new roles and developed different aspects of their present roles within a traditional HE context.
Obfuscation-based malware update: A comparison of manual and automated methods
Indexed in: Scopus; Web of Science.
This research presents a proposal for malware classification and updating based on capability and obfuscation. The article is an extension of [4], and describes the procedure for malware updating: taking obsolete malware that is already detectable by antiviruses and updating it through obfuscation techniques so that it becomes undetectable again. Because malware updating is generally performed manually, an automated solution is presented together with a comparison from the standpoint of cost and processing time. The automated method proved to be more reliable, faster, and less resource-intensive, especially in terms of antivirus analysis and malware functionality checking times.
http://univagora.ro/jour/index.php/ijccc/article/view/2961/112
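The core idea of the updating procedure — transforming bytes so that a known signature no longer matches while behaviour is preserved — can be illustrated with a harmless sketch. The signature, payload, and single-byte XOR key below are illustrative assumptions, not the paper's techniques.

```python
# A minimal, harmless sketch of why byte-level obfuscation defeats
# signature matching: a text payload is XOR-encoded with a key, so the
# stored bytes no longer contain the pattern a scanner would look for.
# SIGNATURE and the payload are hypothetical.

SIGNATURE = b"HELLO-PATTERN"            # hypothetical detection signature
payload = b"prefix HELLO-PATTERN suffix"

def xor_bytes(data: bytes, key: int) -> bytes:
    """XOR every byte with a single-byte key; applying it twice restores the input."""
    return bytes(b ^ key for b in data)

obfuscated = xor_bytes(payload, key=0x5A)
assert SIGNATURE in payload             # the plain payload matches the signature
assert SIGNATURE not in obfuscated      # the obfuscated bytes no longer match
assert xor_bytes(obfuscated, 0x5A) == payload  # the transformation is reversible
```

Automating such transformations (and re-checking detectability and functionality afterwards) is exactly the step the paper reports is cheaper and faster than doing it by hand.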
Always in control? Sovereign states in cyberspace
For well over twenty years, we have witnessed an intriguing debate about the nature of cyberspace. Used for everything from communication to commerce, it has transformed the way individuals and societies live. But how has it impacted the sovereignty of states? An initial wave of scholars argued that it had dramatically diminished centralised control by states, helped by a tidal wave of globalisation and freedom. These libertarian claims were considerable. More recently, a new wave of writing has argued that states have begun to recover control in cyberspace, focusing on either the police work of authoritarian regimes or the revelations of Edward Snowden. Both claims were wide of the mark. By contrast, this article argues that we have often misunderstood the materiality of cyberspace and its consequences for control. It not only challenges the libertarian narrative of freedom, it suggests that the anarchic imaginary of the Internet as a "Wild West" was deliberately promoted by states in order to distract from the reality. The Internet, like previous forms of electronic connectivity, consists mostly of a physical infrastructure located in specific geographies and jurisdictions. Rather than circumscribing sovereignty, it has offered centralised authority new ways of conducting statecraft. Indeed, the Internet, high-speed computing, and voice recognition were all the result of security research by a single information hegemon, which has therefore always been in control.
Blurring the boundaries? Supporting students and staff within an online learning environment
The inclusion of online learning technologies into the higher education (HE) curriculum is frequently associated with the design and development of new models of learning. One could argue that e-learning even demands a reconfiguration of traditional methods of learning and teaching. One of the key elements of this transformational process is flexibility. This paper considers a number of aspects relating to the flexibility inherent within models of online learning and the potential impact of this on support structures. City University, London, is used as a case study to provide examples of online practice which support the strategies outlined here. A number of models of online learning are used at the University to provide evidence of the variation in modes of support and illustrate the different needs of both students and staff when using these forms of learning. What is apparent through this discussion is that to provide effective support for online learners, whether students or staff, clear and solid structures need to be put in place to assist with the creation of an online community.
Flow of emotional messages in artificial social networks
Models of message flows in an artificial group of users communicating via the Internet are introduced and investigated using numerical simulations. We assumed that messages possess an emotional character with a positive valence and that the willingness to send the next affective message to a given person increases with the number of messages received from this person. As a result, the weights of links between group members evolve over time. Memory effects are introduced, taking into account that the preferential selection of message receivers depends on the communication intensity during the recent period only. We also model the phenomenon of secondary social sharing, when the reception of an emotional e-mail triggers the distribution of several emotional e-mails to other people.
Comment: 10 pages, 7 figures, submitted to International Journal of Modern Physics
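The preferential-selection rule with a finite memory window can be sketched as a small simulation. The group size, window length, and uniform choice of sender below are assumptions of this sketch, not the paper's exact model.

```python
import random
from collections import deque

# A minimal sketch of preferential message routing with memory: an agent
# picks a receiver with probability proportional to 1 + the number of
# messages recently received from that agent, so active links strengthen
# while messages outside the memory window are forgotten.

def simulate(n_agents=5, steps=2000, window=200, seed=1):
    rng = random.Random(seed)
    recent = deque()                    # (step, sender, receiver) within the window
    counts = [[0] * n_agents for _ in range(n_agents)]  # counts[i][j]: msgs i received from j

    for t in range(steps):
        # forget messages older than the memory window
        while recent and recent[0][0] <= t - window:
            _, s, r = recent.popleft()
            counts[r][s] -= 1
        sender = rng.randrange(n_agents)          # assumed uniform sender choice
        others = [j for j in range(n_agents) if j != sender]
        weights = [1 + counts[sender][j] for j in others]  # preferential selection
        receiver = rng.choices(others, weights=weights)[0]
        counts[receiver][sender] += 1
        recent.append((t, sender, receiver))
    return counts

counts = simulate()  # link weights after the run, shaped by recent activity only
```

Secondary social sharing could be layered on top by letting each received message trigger further sends with some probability; the skeleton above only covers the memory-windowed preferential selection.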
Building communities for the exchange of learning objects: theoretical foundations and requirements
In order to reduce the overall costs of developing high-quality digital courses (including both the content, and the learning and teaching activities), the exchange of learning objects has been recognized as a promising solution. This article makes an inventory of the issues involved in the exchange of learning objects within a community. It explores some basic theories, models and specifications and provides a theoretical framework containing the functional and non-functional requirements to establish an exchange system in the educational field. Three levels of requirements are discussed. First, non-functional requirements deal with the technical conditions needed to make learning objects interoperable. Second, some basic use cases (activities) are identified that must be facilitated to enable the technical exchange of learning objects, e.g. searching and adapting the objects. Third, some basic use cases are identified that are required to establish the exchange of learning objects in a community, e.g. policy management, information and training. The implications of this framework are then discussed, including recommendations concerning the identification of reward systems, role changes and evaluation instruments.
The Enigma of Digitized Property: A Tribute to John Perry Barlow
Compressive Sensing has attracted a lot of attention over the last decade within the areas of applied mathematics, computer science and electrical engineering, because it suggests that we can sample a signal below the limit that traditional sampling theory prescribes. By then using different recovery algorithms we are able, theoretically, to recover the complete original signal even though we have taken very few samples to begin with. It has been proven that these recovery algorithms work best on signals that are highly compressible, meaning that the signals have a sparse representation where the majority of the signal elements are close to zero. In this thesis we implement some of these recovery algorithms and investigate how they perform practically on a real video signal consisting of 300 sequential image frames. The video signal is undersampled, using compressive sensing, and then recovered using two types of strategies: one where no time correlation between successive frames is assumed, using the classical greedy algorithm Orthogonal Matching Pursuit (OMP) and a more robust, modified OMP called Predictive Orthogonal Matching Pursuit (PrOMP); and one newly developed algorithm, Dynamic Iterative Pursuit (DIP), which assumes and utilizes time correlation between successive frames. We then evaluate and compare the performance of these two strategies using the Peak Signal to Noise Ratio (PSNR) as a metric. We also provide visual results. Based on investigation of the data in the video signal, using a simple model for the time correlation and transition probabilities between different signal coefficients in time, the DIP algorithm showed good recovery performance.
The main results showed that DIP performed increasingly well over time and outperformed PrOMP by up to 6 dB at half the original sampling rate, but performed slightly below PrOMP in a smaller part of the video sequence where the time correlation between successive frames in the original video suddenly became weaker.
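The greedy baseline named in the abstract can be sketched in a few lines of NumPy. The dictionary dimensions and the 3-sparse test signal below are illustrative assumptions, not the thesis's video data.

```python
import numpy as np

# A minimal sketch of Orthogonal Matching Pursuit (OMP): greedily pick the
# dictionary column most correlated with the residual, then re-fit all
# selected columns by least squares (the "orthogonal" step).

def omp(A, y, k):
    """Recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # select the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the selected columns, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 80))
A /= np.linalg.norm(A, axis=0)            # unit-norm dictionary columns
x_true = np.zeros(80)
x_true[[7, 23, 61]] = [2.0, -1.5, 3.0]    # 3-sparse test signal
x_hat = omp(A, A @ x_true, k=3)           # recovered from 50 of 80 "samples"
```

PrOMP and DIP build on this loop by adding prediction and by exploiting the time correlation between successive frames, respectively; the sketch covers only the time-independent baseline.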
The effects of progressive levels of 3D authenticity antecedents and consequences on consumers' virtual experience
This study investigates the effects of authentic three-dimensional (3D) product visualisation antecedents on 3D authenticity, and the effects of 3D authenticity consequences on consumers' virtual experience. A hypothetical retailer Web site presents a variety of laptops for the within-subjects laboratory experiments. In a first experiment, a one-way ANOVA compares telepresence and authenticity scores. The second experiment uses a two-way repeated measures ANOVA to determine the effects of the progressive levels of the antecedents on 3D authenticity. A third experiment uses a two-way repeated measures ANOVA to determine the effects of the progressive levels of 3D authenticity consequences on willingness to purchase. The results show that authenticity is more useful than telepresence in simulating consumers' virtual experience. The high levels of control and animated colours lead to higher authenticity for the site. In addition, the high levels of 3D utilitarian and hedonic constructs enhance willingness to purchase from the online retailer.
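The one-way ANOVA used in the first experiment amounts to comparing between-group and within-group variability. The sketch below computes the F statistic directly in NumPy; the group means, spread, and sample sizes are illustrative assumptions, not the study's measurements.

```python
import numpy as np

# A minimal one-way ANOVA sketch: F = between-group mean square divided by
# within-group mean square. The three groups stand in for progressive levels
# of 3D visualisation; all numbers here are hypothetical.

def one_way_anova_F(*groups):
    """Return the F statistic for a one-way ANOVA over the given groups."""
    data = np.concatenate(groups)
    grand_mean = data.mean()
    k, n = len(groups), data.size
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

rng = np.random.default_rng(42)
low    = rng.normal(3.0, 0.8, 30)   # hypothetical authenticity ratings
medium = rng.normal(3.6, 0.8, 30)
high   = rng.normal(4.4, 0.8, 30)

F = one_way_anova_F(low, medium, high)  # a large F suggests the level matters
```

The repeated-measures designs in the second and third experiments additionally partition out between-subject variance, which this independent-groups sketch does not do.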