133 research outputs found

    Evolution of Marketing in Smart Cities through the Collaboration Design

    More and more cities are striving to become smart cities, and the market is growing at a considerable pace. The process faces many challenges, however, including limited municipal budgets, the availability of skilled staff, and privacy and cyber-security concerns. Moreover, in technology-driven smart city development an essential element has been lost along the way: the human dimension. As the world begins to recognize this deficiency, the search for better methodologies has started, along with an open effort to understand the relations among humans, technology, and society in order to manage their effect on business and the economy. This development will eventually enter the awareness of the electorate in democratic societies and thus influence public policy, making room for a new equilibrium within the triad of people, businesses, and public policy. Being close to the population and its everyday needs (smart), cities will undoubtedly act as a push factor in these developments. Propelled by technological change and new values, private-public-people partnerships (PPPP) will gain pace. The communicators who bring these new relationships to life are thereby challenged by metadesign: designing for the "new" designer(s), the empowered end user. The next challenge for marketing in smart cities is therefore the creation of tools and methodologies for new forms of collaboration design. After presenting the factors driving the growth of smart cities in different parts of the world, the authors identify important challenges that still need to be overcome in different markets. Special focus is given to contemporary challenges of public policy seen through smart city development, which, by requiring new marketing design, exerts pressure on public policy. Smart city marketing design is discussed from the perspective of the need to hear human needs while supporting the functionality of the 4Ps. Its concrete role is to build understanding of the need for collaboration, which can reduce the costs of public policy and thereby enlarge the benefits of collective action in smart cities.

    Trends in pediatric-adjusted shock index predict morbidity in children with moderate blunt injuries

    Purpose Trending the pediatric-adjusted shock index (SIPA) after admission has been described for children suffering severe blunt injuries (i.e., Injury Severity Score (ISS) ≥ 15). We propose that following SIPA in children with moderate blunt injuries, defined as ISS 10–14, has similar utility. Methods The trauma registry at a single institution was queried over a 7-year period. Patients were included if they were between 4 and 16 years old at the time of admission, sustained a blunt injury with an ISS of 10–14, and were admitted less than 12 h after their injury (n = 501). Each patient's SIPA was calculated at 0, 12, 24, 36, and 48 h after admission and categorized as elevated or normal at each time point based on previously reported values. Outcome variables were analyzed as a function of the time from admission required for an abnormal SIPA to normalize, and of the time for a normal admission SIPA to become abnormal. Results In patients with a normal SIPA at arrival, elevation within the first 24 h of admission correlated with increased length of stay (LOS). Increased transfusion requirement, incidence of infectious complications, and need for in-patient rehabilitation were also seen in the analyzed sub-groups. In patients with an elevated SIPA at arrival, a longer time to normalize SIPA correlated with increased LOS in the entire cohort and in those without head injury, but not in patients with a head injury. No deaths occurred within the study cohort. Conclusions Patients with an ISS of 10–14 and a normal SIPA at arrival who then have an elevated SIPA in the first 24 h of admission are at increased risk for morbidity, including longer LOS and infectious complications. Similarly, time to normalize an elevated admission SIPA appears to correlate directly with LOS in patients without head injuries. No correlations with markers of morbidity could be identified in patients with a head injury and an elevated SIPA at arrival. This may be due to small sample size, as there was no relation between severity of head injury, as measured by the head Abbreviated Injury Scale (head AIS), and the outcome variables reported; this is an area of ongoing analysis. This study extends the previously reported utility of following SIPA after admission to milder blunt injuries.
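
    A minimal sketch of the SIPA computation described above, in Python. The age-banded cutoffs below are the values commonly cited in the earlier SIPA literature and are an assumption here; the abstract itself only refers to "previously reported values".

        def sipa(heart_rate, systolic_bp):
            """Shock index, pediatric age-adjusted: heart rate / systolic blood pressure."""
            return heart_rate / systolic_bp

        def sipa_elevated(age_years, heart_rate, systolic_bp):
            """Flag SIPA as elevated using assumed age-banded cutoffs (4-6 y: >1.22,
            7-12 y: >1.0, 13-16 y: >0.9); the study's exact thresholds may differ."""
            if 4 <= age_years <= 6:
                cutoff = 1.22
            elif age_years <= 12:
                cutoff = 1.0
            elif age_years <= 16:
                cutoff = 0.9
            else:
                raise ValueError("cutoffs assumed only for ages 4-16")
            return sipa(heart_rate, systolic_bp) > cutoff

        # Example: a 10-year-old with HR 130 and SBP 100 -> SIPA = 1.3 -> elevated.
        print(sipa_elevated(10, 130, 100))  # True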

    Tourism 4.0: Challenges in Marketing a Paradigm Shift

    People have been traveling since the earliest times, and the tourism industry has always adapted to social and technological development. In the era of digitalization, it needs to adapt again. Around 1.3 billion people travel around the world every year, so even a small change in this sector has a huge impact on society as a whole. We propose a new paradigm, Tourism 4.0, which arises from the quest to unlock the innovation potential of the whole tourism sector. This will be done with the help of key enabling technologies from Industry 4.0, such as the Internet of Things, Big Data, Blockchain, Artificial Intelligence, Virtual Reality, and Augmented Reality. By establishing a collaborative ecosystem involving local inhabitants, local authorities, tourists, service providers, and government, we can co-create an enriched tourism experience in both the physical and the digital world. With this, we can shift from a tourist-centered focus to a tourism-centered focus built around the local community. Who is the consumer in this new paradigm of tourism, and what is the role of marketing in a paradigm shift? The chapter analyzes current developments and presents the main shifts they bring about.

    Garbling, Stacked and Staggered: Faster k-out-of-n Garbled Function Evaluation

    Stacked Garbling (SGC) is a Garbled Circuit (GC) improvement that efficiently and securely evaluates programs with conditional branching. SGC reduces bandwidth consumption such that communication is proportional to the size of the single longest program execution path, rather than to the size of the entire program. Crucially, however, the parties expend increased computational effort compared to classic GC. Motivated by the setting of procuring a subset from a menu of computational services or tasks, we consider GC evaluation of k-out-of-n branches, whose indices are known (or eventually revealed) to the GC evaluator E. Our stack-and-stagger technique amortizes GC computation in this setting. We retain the communication advantage of SGC while significantly improving computation and wall-clock time. Namely, each GC party garbles (or evaluates) a total of n branches, a significant improvement over the O(nk) garblings/evaluations needed by standard SGC. We present our construction as a garbling scheme. Our technique brings significant overall performance improvement in various settings, including those typically considered in the literature: e.g., on a 1 Gbps LAN we evaluate 16-out-of-128 functions ~7.68x faster than standard stacked garbling.
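
    The amortization claim reduces to simple bookkeeping. The Python sketch below compares the total number of branch garblings per party implied by the abstract: n for stack-and-stagger versus O(nk) for standard SGC, with the hidden constant taken as 1 purely for illustration.

        def garblings_stack_and_stagger(n, k):
            """Each party garbles/evaluates a total of n branches."""
            return n

        def garblings_standard_sgc(n, k):
            """Standard SGC needs O(n*k) garblings; constant assumed to be 1 here."""
            return n * k

        n, k = 128, 16
        print(garblings_stack_and_stagger(n, k))  # 128
        print(garblings_standard_sgc(n, k))       # 2048 (illustrative)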

    MOTIF: (Almost) Free Branching in GMW via Vector-Scalar Multiplication

    MPC functionalities are increasingly specified in high-level languages, where control-flow constructions such as conditional statements are extensively used. Today, concretely efficient MPC protocols are circuit-based and must evaluate all conditional branches at high cost to hide the taken branch. The Goldreich-Micali-Wigderson, or GMW, protocol is a foundational circuit-based technique that realizes MPC for p players and is secure against up to p - 1 semi-honest corruptions. While GMW requires communication rounds proportional to the computed circuit's depth, it is effective in many natural settings. Our main contribution is MOTIF (Minimizing OTs for IFs), a novel GMW extension that evaluates conditional branches almost for free by amortizing Oblivious Transfers (OTs) across branches. That is, we simultaneously evaluate multiple independent AND gates, one gate from each mutually exclusive branch, by representing them as a single cheap vector-scalar multiplication (VS) gate. For 2PC with b branches, we simultaneously evaluate up to b AND gates using only two 1-out-of-2 OTs of b-bit secrets. This is a factor ~b improvement over the state-of-the-art 2b 1-out-of-2 OTs of 1-bit secrets. Our factor-b improvement generalizes to the multiparty setting as well: b AND gates consume only p(p - 1) 1-out-of-2 OTs of b-bit secrets. We implemented our approach and report its performance. For 2PC and a circuit with 16 branches, each comparing two length-65000 bitstrings, MOTIF outperforms standard GMW in terms of communication by ~9.4x. Total wall-clock time is improved by 4.1-9.2x depending on network settings. Our work is in the semi-honest model, tolerating all-but-one corruptions.
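
    The OT savings quoted above amount to simple counting. The Python sketch below is bookkeeping only, not an implementation of GMW or MOTIF: it tallies the 1-out-of-2 OTs needed for one batch of b simultaneous AND gates, one per mutually exclusive branch, using the counts stated in the abstract.

        def ots_standard_gmw_2pc(b):
            """Standard GMW, 2PC: 2b OTs of 1-bit secrets -> (count, secret bits)."""
            return 2 * b, 1

        def ots_motif_2pc(b):
            """MOTIF, 2PC: the b ANDs share two OTs of b-bit secrets."""
            return 2, b

        def ots_motif_multiparty(b, p):
            """MOTIF, p parties: p(p-1) OTs of b-bit secrets for the b ANDs."""
            return p * (p - 1), b

        print(ots_standard_gmw_2pc(16))     # (32, 1)
        print(ots_motif_2pc(16))            # (2, 16)
        print(ots_motif_multiparty(16, 3))  # (6, 16)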

    Masked Triples: Amortizing Multiplication Triples across Conditionals

    A classic approach to MPC uses preprocessed multiplication triples to evaluate arbitrary Boolean circuits. If the target circuit features conditional branching, e.g. as the result of an IF program statement, then triples are wasted: one triple is consumed per AND gate, even if the output of the gate is entirely discarded by the circuit's conditional behavior. In this work, we show that multiplication triples can be re-used across conditional branches. For a circuit with b branches, each having n AND gates, we need only a total of n triples, rather than the typically required b·n. Because preprocessing triples is often the most expensive step in protocols that use them, this significantly improves performance. Prior work similarly amortized oblivious transfers across branches in the classic GMW protocol (Heath et al., Asiacrypt 2020, [HKP20]). In addition to demonstrating that conditional improvements are possible for a different class of protocols, we also concretely improve over [HKP20]: their maximum improvement is bounded by the topology of the circuit. Our protocol yields improvement independent of topology: we need triples proportional to the size of the program's longest execution path, regardless of the structure of the program branches. We implemented our approach in C++. Our experiments show that we significantly improve over a naive protocol and over prior work: for a circuit with 16 branches and in terms of total communication, we improved over naive by 12x and over [HKP20] by an average of 2.6x. Our protocol is secure against the semi-honest corruption of p-1 parties.
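
    For reference, the building block whose preprocessing the paper amortizes is the textbook Beaver-triple AND. The Python sketch below simulates both XOR-share holders on one machine; it shows only the standard triple-consuming multiplication, not the paper's branch-masking technique.

        import secrets

        def share(bit):
            """Split a bit into two XOR shares."""
            r = secrets.randbits(1)
            return r, bit ^ r

        def gen_triple():
            """Preprocessing: random a, b and c = a AND b, all XOR-shared."""
            a, b = secrets.randbits(1), secrets.randbits(1)
            return share(a), share(b), share(a & b)

        def beaver_and(x_sh, y_sh, triple):
            """Online phase: compute XOR shares of x AND y, consuming one triple."""
            (a0, a1), (b0, b1), (c0, c1) = triple
            # Both parties open d = x ^ a and e = y ^ b.
            d = (x_sh[0] ^ a0) ^ (x_sh[1] ^ a1)
            e = (y_sh[0] ^ b0) ^ (y_sh[1] ^ b1)
            # z_i = c_i ^ (d & b_i) ^ (e & a_i); party 0 additionally XORs in d & e.
            z0 = c0 ^ (d & b0) ^ (e & a0) ^ (d & e)
            z1 = c1 ^ (d & b1) ^ (e & a1)
            return z0, z1

        x_sh, y_sh = share(1), share(1)
        z0, z1 = beaver_and(x_sh, y_sh, gen_triple())
        print(z0 ^ z1)  # 1, i.e. 1 AND 1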

    Trends in pediatric adjusted shock index predict morbidity and mortality in children with severe blunt injuries

    Purpose The utility of measuring the pediatric adjusted shock index (SIPA) at admission for predicting severity of blunt injury in pediatric patients has been previously reported. However, the utility of following SIPA after admission is not well described. Methods The trauma registry of a level-one pediatric trauma center was queried from January 1, 2010 to December 31, 2015. Patients were included if they were between 4 and 16 years old at the time of admission, sustained a blunt injury with an Injury Severity Score ≥ 15, and were admitted less than 12 h after their injury (n = 286). Each patient's SIPA was calculated at 0, 12, 24, 36, and 48 h after admission and categorized as elevated or normal at each time point based upon previously reported values. Outcome variables were analyzed as a function of the time from admission required for an abnormal SIPA to normalize, and of the time for a normal admission SIPA to become abnormal. Results Among patients with a normal SIPA at arrival, 18.4% of those who developed an elevated SIPA at 12 h after admission died, whereas 2.4% of those who maintained a normal SIPA throughout the first 48 h of admission died (p < 0.01). Among patients with an elevated SIPA at arrival, increased time to normalize SIPA correlated with increased length of stay (LOS) and intensive care unit (ICU) LOS. Similarly, elevation of SIPA after arrival in patients with a normal initial SIPA correlated with increased LOS and ICU LOS. Conclusions Patients with a normal SIPA at time of arrival who then have an elevated SIPA in the first 24 h of admission are at increased risk for morbidity and mortality compared to those whose SIPA remains normal throughout the first 48 h of admission. Similarly, time to normalize an elevated admission SIPA appears to correlate directly with LOS, ICU LOS, and other markers of morbidity across a mixed blunt trauma population. Whether trending SIPA early in the hospital course serves only as a marker of injury severity or has utility as a resuscitation metric has not yet been determined.
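
    A minimal sketch of the trend variables described in the Methods, assuming SIPA has already been flagged as elevated or normal at the 0, 12, 24, 36, and 48 h time points; the helper names below are hypothetical and not from the paper.

        TIME_POINTS_H = (0, 12, 24, 36, 48)

        def time_to_normalize(elevated_flags):
            """For a patient with an elevated admission SIPA, return the first time
            point (in hours) at which SIPA is normal, or None if it never normalizes."""
            if not elevated_flags[0]:
                raise ValueError("expects an elevated SIPA at arrival")
            for t, elevated in zip(TIME_POINTS_H, elevated_flags):
                if not elevated:
                    return t
            return None

        def becomes_elevated_within_24h(elevated_flags):
            """For a patient with a normal admission SIPA, flag elevation by 24 h."""
            if elevated_flags[0]:
                raise ValueError("expects a normal SIPA at arrival")
            return any(elevated_flags[1:3])  # the 12 h and 24 h time points

        print(time_to_normalize([True, True, False, False, False]))            # 24
        print(becomes_elevated_within_24h([False, True, False, False, False])) # True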

    Peripheral Progenitor Cell Graft in the Rat: A Technique of Graft Processing

    The aim of this study was to establish a procedure for blood progenitor cell graft processing in rats. As a first step, the mobilization protocol was optimized. The second step was dedicated to defining the optimal source for subsequent graft manufacturing: either peripheral blood or spleen. The third step was designed to establish a protocol for the purification of stem cells. The best mobilization results in terms of white blood cell count, granulocyte colony-forming units (CFU-G), and CD90-positive progenitor cells were obtained after pre-treatment of the donors for 5 days with recombinant human granulocyte colony-stimulating factor (100 μg/kg) in combination with murine stem cell factor (33 μg/kg). Splenectomy prior to mobilization increased the yield of stem cells from peripheral blood. The number of CD90-positive progenitor cells recovered from the spleen of one rat after stem cell mobilization was sufficient to generate one stem cell graft. Grafts containing 1 x 10^6 progenitor cells, and thus sufficient for transplantation, were obtained after T-cell depletion and positive selection of CD90-positive cells. The grafts were characterized and showed a purity exceeding 70%, a T-cell depletion of 3.6 log10, and a 3-fold increase in CFU-G compared with the yield post mobilization.

    Logstar: Efficient Linear* Time Secure Merge

    Secure merge considers the problem of combining two sorted lists into a single sorted secret-shared list. Merge is a fundamental building block for many real-world applications. For example, secure merge can implement a large number of SQL-like database joins, which are essential for almost any data processing task such as privacy-preserving fraud detection, ad conversion rates, data deduplication, and many more. We present two constructions with a communication bandwidth and rounds tradeoff. Logstar, our bandwidth-optimized construction, takes inspiration from Falk and Ostrovsky (ITC, 2021) and runs in O(n log* n) time and communication with O(log n) rounds. In particular, for all conceivable n, the log* n factor will be equal to the constant 2, and therefore we achieve a near-linear running time. Median, our rounds-optimized construction, builds on the classic parallel medians-based insecure merge approach of Valiant (SIAM J. Comput., 1975), later explored in the secure setting by Blunk et al. (2022), and requires O(n log^c n), 1 < c < 2, communication with O(log log n) rounds. We introduce two additional constructions that merge input lists of different sizes. SquareRootMerge merges lists of sizes n^(1/2) and n, and runs in O(n) time and communication with O(log n) rounds. CubeRootMerge is closely inspired by Blunk et al.'s (2022) construction and merges lists of sizes n^(1/3) and n. It runs in O(n) time and communication with O(1) rounds. We optimize our constructions for concrete efficiency. Today, concretely efficient secure merge protocols rely on standard techniques such as GMW or generic sorting. These approaches require an O(n log n)-size circuit of O(log n) depth. In contrast, our constructions are more efficient and also achieve superior asymptotics. We benchmark our constructions and obtain significant improvements. For example, Logstar reduces bandwidth costs ≈3.3x and Median reduces rounds ≈2.22x.
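
    For orientation, the functionality these protocols realize securely is the ordinary merge of two sorted lists, shown below as a cleartext Python reference. Note that the classic two-pointer merge is linear but its comparison pattern depends on the data, which is exactly what the secret-shared constructions must hide.

        def merge(xs, ys):
            """Cleartext reference merge of two sorted lists; no security machinery."""
            out, i, j = [], 0, 0
            while i < len(xs) and j < len(ys):
                if xs[i] <= ys[j]:
                    out.append(xs[i])
                    i += 1
                else:
                    out.append(ys[j])
                    j += 1
            out.extend(xs[i:])
            out.extend(ys[j:])
            return out

        print(merge([1, 4, 9], [2, 3, 10]))  # [1, 2, 3, 4, 9, 10]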

    Fast ORAM with Server-aided Preprocessing and Pragmatic Privacy-Efficiency Trade-off

    Data-dependent accesses to memory are necessary for many real-world applications, but their cost remains prohibitive in secure computation. Prior work has either focused on minimizing the need for data-dependent access in these applications, or reduced its cost by improving oblivious RAM for secure computation (SC-ORAM). Despite extensive efforts to improve SC-ORAM, the most concretely efficient solutions still require ≈0.7 s per access to arrays of 2^30 entries. This plainly precludes using MPC in a number of settings. In this work, we take a pragmatic approach, exploring how concretely cheap MPC RAM access could be made if we are willing to allow one of the participants to learn the access pattern. We design a highly efficient Shared-Output Client-Server ORAM (SOCS-ORAM) that has constant overhead, uses one round-trip of interaction per access, and whose access cost is independent of array size. SOCS-ORAM is useful in settings with hard performance constraints, where one party in the computation is more trustworthy and is allowed to learn the RAM access pattern. Our SOCS-ORAM is assisted by a third helper party that helps initialize the protocol and is designed for the honest-majority semi-honest corruption model. We implement our construction in C++ and report its performance. For an array of length 2^30 with 4 B entries, we communicate 13 B per access and incur essentially no overhead beyond network latency.
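
    The privacy-efficiency trade-off can be illustrated with a deliberately simplified toy, sketched below in Python: the server learns the access pattern because the index is sent in the clear, while entry contents stay hidden because the array is stored under additive masks and the output remains secret-shared. This is not the paper's SOCS-ORAM; in particular, the helper party and its preprocessing are omitted.

        import secrets

        MOD = 2**32  # 4-byte entries, additive shares mod 2^32

        def setup(array):
            """Client keeps random masks; server stores the masked entries."""
            client_masks = [secrets.randbelow(MOD) for _ in array]
            server_store = [(v - m) % MOD for v, m in zip(array, client_masks)]
            return client_masks, server_store

        def access(i, client_masks, server_store):
            """One round trip: the index i is revealed, the value stays shared."""
            return client_masks[i], server_store[i]

        masks, store = setup([10, 20, 30, 40])
        c, s = access(2, masks, store)
        print((c + s) % MOD)  # 30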