The ubiquity of the Internet and the widespread proliferation of electronic devices have resulted in flourishing microtask
crowdsourcing marketplaces, such as Amazon MTurk. An aspect that has remained largely invisible in microtask crowdsourcing
is that of work environments, defined as the hardware and software affordances at the disposal of crowd workers for
completing microtasks on crowdsourcing platforms. In this paper, we reveal the significant role of work environments in the
shaping of crowd work. First, through a pilot study surveying the good and bad experiences workers had with UI elements in
crowd work, we identified the typical issues workers face. Based on these findings, we then deployed over 100 distinct microtasks
on CrowdFlower, addressing workers in India and the USA in two identical batches. These tasks emulate the good and bad UI
element designs that characterize crowdsourcing microtasks. We recorded hardware specifics such as CPU speed and device
type, along with software specifics including the browsers used to complete the tasks, the operating systems running on the
devices, and other properties that define the work environments of crowd workers. Our findings indicate that crowd workers are
embedded in a variety of work environments, which influence the quality of the work produced. To validate these data-driven
findings, we then carried out semi-structured interviews with a sample of Indian and American crowd workers from the platform.
Depending on the design of UI elements in microtasks, we found that some work environments are better suited than others to
supporting crowd workers. Based on the overall findings of these three studies, we introduce ModOp, a tool that helps design
crowdsourcing microtasks suited to diverse crowd work environments. We empirically show that the use of ModOp reduces the
cognitive load of workers, thereby improving their user experience without affecting accuracy or task completion time.
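
As an illustration of the kind of environment recording described above, the following TypeScript sketch shows one way a microtask page could capture such client-side signals with standard browser APIs. This is an illustrative assumption, not the paper's actual instrumentation: the names EnvironmentSnapshot and captureEnvironment are hypothetical, and browsers expose the logical core count (navigator.hardwareConcurrency) rather than the CPU clock speed, so measuring CPU speed would additionally require a timing benchmark.

// Minimal sketch (assumed names, not the authors' instrumentation) of
// capturing work-environment signals from a worker's browser.
interface EnvironmentSnapshot {
  userAgent: string;    // parseable into browser and operating system
  logicalCores: number; // proxy for CPU capability; clock speed is not exposed
  screenWidth: number;
  screenHeight: number;
  pixelRatio: number;   // helps distinguish mobile devices from desktops
  language: string;
  capturedAt: string;   // ISO timestamp
}

function captureEnvironment(): EnvironmentSnapshot {
  return {
    userAgent: navigator.userAgent,
    logicalCores: navigator.hardwareConcurrency ?? 1,
    screenWidth: screen.width,
    screenHeight: screen.height,
    pixelRatio: window.devicePixelRatio,
    language: navigator.language,
    capturedAt: new Date().toISOString(),
  };
}

// A snapshot could then be attached to each submitted task response, e.g.
// sent alongside the answers to a (hypothetical) collection endpoint:
// fetch("/api/environment", { method: "POST", body: JSON.stringify(captureEnvironment()) });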