In-network computing is an emerging paradigm that promises significant potential for continuing to accelerate datacenter applications in the post-Dennard-scaling era. With the emergence of various programmable NICs and switches, a plethora of ideas have been proposed to offload functions that used to run on the CPU onto these network devices. Like traditional datacenter hardware such as CPUs and DRAM, programmable network devices carry limited resources that must be shared, which requires a suite of policies, mechanisms, and application interfaces. We argue that a practical approach to enabling shared in-network computing requires this suite not only to operate within the existing norms of the infrastructure, but also to help applications harness the potential of in-network computing; prior work generally falls short when viewed through this lens. In this light, this dissertation presents three novel systems. Slingshot leverages in-network computing to enhance the availability of the virtualized radio access network (vRAN), an existing mission-critical application; it works transparently with the vRAN application through a carefully designed interface that bridges in-network computing and the application. Yama features a set of mechanisms that provide performance isolation across high-level entities, such as tenants and users, for already-deployed black-box offloads. MTP is a transport with revamped interfaces and mechanisms that enable the various message operations of application-level offloaded processing, with which existing transports are fundamentally incompatible. We prototype all of these systems and evaluate their efficacy using a combination of end-to-end benchmarks with real-world applications, microbenchmarks with emulated offloads and synthetic workloads, and large-scale simulations. We argue that the ideas in these individual systems can be fused together, taking a solid step toward practically enabling in-network computing in datacenters.