I tried DoorDash’s Tasks app—here’s what it reveals about the next wave of AI gig work
DoorDash’s new Tasks app pays people to film everyday chores and label data for AI. It’s a glimpse into a growing labor market that blends content creation, data labeling, and precarious gig economics.
Background
The artificial intelligence boom created an insatiable demand for data: voices to transcribe, images to annotate, edge cases to catch, and increasingly, videos of the physical world with human hands doing everyday things. Models that can see, hear, and act need examples—thousands or millions of them—before they can interpret a kitchen, follow a recipe, recognize a hazard, or manipulate an object.
For a decade, that work happened quietly on platforms like Amazon Mechanical Turk, Appen, and Scale AI’s Remotasks. Workers labeled bounding boxes, judged search results, transcribed audio, and flagged content—often for pennies per task. As multimodal and robotics-leaning AI systems rise, a different kind of dataset is in demand: egocentric footage and tightly structured clips of people interacting with objects in the real world. Tech companies have courted universities, scraped video-sharing platforms, and built large research datasets. But there’s a bottleneck: high-quality, consented, diverse videos that show people doing ordinary, repeatable tasks from many angles and contexts.
Enter a new flavor of gig work: apps that recruit the public to produce this data directly. DoorDash—the food-delivery company synonymous with couriers and takeout—now has a Tasks app that pays people to record short videos of household chores, simple cooking, and daily routines, plus complete annotation and categorization assignments. It turns your phone into a camera crew for AI training and your free time into an on-demand micro-studio.
This is not delivery work. It’s data labor: producing and labeling content that will be fed into machine-learning pipelines. It promises flexibility and fast payouts, with work you can do at home. It also raises new questions about pay, safety, privacy, ownership, and the long tail of how your likeness and your home might persist in an AI model long after the gig ends.
What happened
A reporter tested DoorDash’s Tasks app by completing a sampling of available assignments over several days. The app offered a rotating list of “missions,” each with specific instructions—such as filming a single-take video of yourself performing a chore, or narrating a step-by-step process while your phone stays stationary. Some assignments asked for multiple variations: different lighting, angles, or object substitutions. Others were classic microtasks: classify a clip, verify a label, or check whether instructions match what happens in a video.
- Example tasks included: folding laundry on a flat surface, preparing a basic breakfast item, organizing items on a shelf, and walking along a path while keeping the camera level and steady. Some asked for verbal narration; others required hands-only framing to avoid capturing faces.
- Payouts varied by complexity and length, typically listing a fixed price per clip or per batch of clips, with bonuses for completing all steps exactly as specified. The effective hourly rate depended heavily on setup time, retakes, and the learning curve imposed by exacting instructions.
- Quality control mattered. Tasks were reviewed for adherence to guidelines—lighting, framing, avoidance of copyrighted materials, clear visibility of the object and action, and no identifiable bystanders or sensitive information in the background. Rejections could mean no pay and a requirement to re-shoot.
- The app presented opt-in consent screens describing data use for AI training and quality assurance. Workers could delete uploaded media from their accounts, but it was less clear whether content already used for model training could be fully withdrawn—a well-known tension in AI once data is baked into a model’s parameters.
The overall experience felt like filming instructional B‑roll on demand. It’s part creator economy, part content moderation, part assembly line. On a good run, a motivated worker could queue several tasks and achieve decent throughput. On a bad run, instructions proved finicky, resulting in reshoots, diminished pay per hour, and mounting frustration.
Why DoorDash—of all companies—wants this data
DoorDash’s consumer brand is delivery, but its infrastructure is data-rich: dynamic pricing, routing, logistics, and now, conversational shopping and support bots. Building AI that better understands items, instructions, kitchens, and stores benefits everything from support automation to in-store picking and future robotic helpers. Even if the company isn’t building a household robot, the broader AI ecosystem is scrambling for videos of real-world manipulations. Selling or licensing high-quality, consented training data is itself a business.
From an AI perspective, clips of humans completing consistent, labeled micro-actions are gold. They help models learn:
- Visual grounding: “This is a whisk. This is a spatula. This is scrambling vs. stirring.”
- Procedural reasoning: What steps come before others in a task like making eggs or folding a shirt.
- Egocentric perception: How motion blur, occlusions, and viewpoint changes look from a person’s perspective.
- Language-action alignment: Mapping natural language instructions to visual actions—and vice versa.
The Tasks app operationalizes all of this by turning kitchens, bedrooms, and sidewalks into ad hoc training studios.
The economics—piecework with a production twist
As with other gig platforms, the pay is piece-rate. A small but real production burden shifts onto workers:
- Time goes not only into the action itself but also into staging: clearing counters, checking lighting, angling the phone, ensuring audio clarity, and avoiding unintended identifiers.
- Rejections are costly. A single violation—say, a brand logo appears or a family photo is in frame—can render a clip unusable.
- There’s a skill curve. Workers who become adept at quickly staging, shooting, and batch-submitting can raise their effective hourly rate; newcomers will likely earn less while learning.
This is creative piecework under algorithmic oversight, with an aesthetic that’s less about personality and more about reproducible clarity.
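The arithmetic above is easy to get wrong in your head, because rejected clips and staging time silently dilute the advertised per-clip price. Here is a minimal sketch, using entirely hypothetical numbers (the article reports no standard rates), of how a worker might compute a true effective hourly rate:

```python
def effective_hourly_rate(pay_per_clip: float, approved: int, rejected: int,
                          setup_minutes: float, minutes_per_clip: float) -> float:
    """Piece-rate earnings divided by ALL time spent, including rejected takes.

    Rejected clips earn nothing but still cost shooting time, which is why
    the effective rate can fall well below the listed per-clip price.
    """
    total_clips = approved + rejected
    hours = (setup_minutes + minutes_per_clip * total_clips) / 60
    return (pay_per_clip * approved) / hours

# Hypothetical session: $2.50 per clip, 8 approved, 2 rejected,
# 15 minutes of staging plus 6 minutes per take.
rate = effective_hourly_rate(2.50, approved=8, rejected=2,
                             setup_minutes=15, minutes_per_clip=6)
print(f"${rate:.2f}/hour")  # prints $16.00/hour
```

In this made-up session, two rejections and setup overhead pull $2.50 a clip down to $16 an hour; a worse approval rate drags it lower still.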
The privacy and consent puzzle
Filming at home introduces risk. Your kitchen might leak more about you than you realize: dietary restrictions, medications, income signals, religious items, children’s artwork with names, or the layout of your space. Even with instructions to avoid faces or identifying information, mistakes happen. And there are technical footprints to consider—metadata embedded in files, ambient audio, and subtle patterns that sophisticated models can infer.
Key questions arise:
- What happens to your data as it flows from app to reviewers to training pipelines? Who else gets it—vendors, cloud providers, research partners?
- How long is it stored, and in what form? If anonymized, how robust is that anonymization against re-identification?
- Can you effectively revoke consent once a model is trained on your data? Today, “machine unlearning” remains more a research frontier than a product guarantee.
- What are the rights of bystanders—housemates, kids, neighbors audible off-camera—whose presence might be captured inadvertently?
Workers should treat this as publishing, not merely uploading. If you wouldn’t post it to a semi-public platform with your name stripped but your home on display, think twice.
Safety and liability
Instructional tasks can create perverse incentives: rush to complete a cooking scene, balance a phone on a precarious surface, or handle hot equipment while narrating. Safety warnings exist, but incentives matter. If a task encourages potentially hazardous behavior—even unintentionally—the platform should bear responsibility for clear guardrails and compensation that doesn’t encourage corner-cutting. There’s also the question of minors: filming in family spaces raises compliance obligations around child safety and consent.
Key takeaways
- AI needs the physical world. The next generation of multimodal and robotics-aware systems can’t subsist on scraped internet text. They need grounded, structured, repeatable footage of hands, tools, and spaces. That demand is turning into paid gigs.
- This is data labor dressed as creator work. You’re not just filming; you’re producing an asset to spec. The closer your setup resembles a small studio—even if it’s just good lighting and consistent angles—the better your pay per hour.
- Pay is volatile and often opaque. Effective hourly earnings depend on task mix, approval rates, and reshoots. A few workers will optimize their way to decent rates; most will face a learning curve with uneven returns.
- Consent today doesn’t guarantee control tomorrow. Once footage is used to train models, withdrawing it becomes technically and contractually challenging.
- Privacy is a first-class risk. Even “hands-only” videos can expose surfaces, habits, and schedules. Small details may be enough to fingerprint a household.
- Quality assurance isn’t just for models; it’s labor governance. Clear, respectful appeals processes for rejections, transparent guidelines, and predictable pay are as important as model accuracy.
- The AI supply chain is decentralizing. Instead of hiring actors and renting studios, companies can crowdsource richly varied, real-world data—shifting cost and risk onto individuals.
What to watch next
- Transparent data governance commitments. Will platforms publish clear data retention schedules, detail downstream sharing, and provide meaningful deletion or opt-out paths even after training? Expect regulators and advocacy groups to push for this.
- Minimum standards for AI data work. Policy proposals could set floor rates for piecework, mandate clear task instructions, and require paid rework when guidelines change. In some jurisdictions, worker classification battles may extend to data labor.
- Privacy-by-design task templates. Apps could bake in automatic background blurring, logo masking, and on-device checks to reduce rejections and exposure. The best platforms will protect workers by default.
- Worker tools and communities. Expect cottage industries of tripods, lighting kits, and forums sharing best practices to boost throughput. Browser extensions have long existed for microtasking; mobile-friendly equivalents for video gigs will follow.
- Data dignity and revenue sharing. The idea that people deserve ongoing compensation when their data underpins profitable models is gaining steam. We may see experiments in licensing, royalties, or collective bargaining for datasets.
- Platform convergence. Delivery and ride-hailing companies already manage huge contractor pools. Don’t be surprised if others launch similar apps to monetize downtime and diversify revenue with AI training work.
- Legal tests around biometric and location data. US state laws like Illinois’ BIPA and comprehensive privacy statutes could shape how face, voice, and home layouts are handled—and the penalties for misuse.
Practical advice if you’re considering this work
- Treat your space like a set. Remove paperwork, photos, mail, and distinctive items from the frame. Use neutral backgrounds and consistent lighting.
- Control audio. Turn off TVs, smart speakers, and music. Close windows to avoid capturing neighbors or passersby.
- Check metadata. If possible, strip GPS tags and other metadata before upload (or ensure the app does). Avoid filming near house numbers or unique exterior features.
- Batch and iterate. Practice a task, then shoot multiple variations in one session to amortize setup time.
- Track your time. Calculate real hourly earnings, including reshoots. If a task keeps slipping under your target rate, skip it.
- Read the terms. Look for arbitration clauses, content ownership, data-sharing disclosures, and appeal mechanisms for rejected tasks.
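On the metadata point: for still frames and thumbnails, the EXIF block that holds GPS coordinates lives in a well-defined JPEG segment, so you can verify it is gone without trusting a tool blindly. The stdlib-only Python sketch below checks for and removes APP1/EXIF segments; note it covers JPEGs only, since video containers such as MP4 store location metadata differently—for footage, use a dedicated tool like exiftool or confirm the app strips it.

```python
import struct

EXIF_HEADER = b"Exif\x00\x00"

def strip_exif(jpeg: bytes) -> bytes:
    """Return a copy of a JPEG with any APP1/EXIF segments (where GPS
    coordinates live) removed. Metadata segments sit between the SOI
    marker (FF D8) and the start-of-scan marker (FF DA)."""
    if jpeg[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG")
    out = bytearray(jpeg[:2])
    i = 2
    while i + 4 <= len(jpeg) and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        if marker == 0xDA:            # start of scan: image data follows
            out += jpeg[i:]           # copy the rest verbatim
            return bytes(out)
        length, = struct.unpack(">H", jpeg[i + 2:i + 4])
        segment = jpeg[i:i + 2 + length]
        if not (marker == 0xE1 and segment[4:10] == EXIF_HEADER):
            out += segment            # keep non-EXIF segments (e.g. JFIF)
        i += 2 + length
    return bytes(out)

def has_exif(jpeg: bytes) -> bool:
    """True if the JPEG still carries an APP1/EXIF block."""
    return strip_exif(jpeg) != jpeg
```

Run `has_exif` on a frame before upload; if it returns True, pass the bytes through `strip_exif` (or your preferred tool) first.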
The bigger picture: a new labor market with old frictions
We’ve been here before. Content moderation taught us that the web runs on unseen labor, often performed under pressure with little protection. Search relevance and ad quality were built by armies of raters. What’s different now is the intimacy of the content. Your workplace is your home. Your product is your motion, your voice, your habits.
There’s dignity in skilled, careful data work—and there’s exploitation when incentives pit speed against safety and privacy. The gap between those outcomes will be determined by platform design, regulation, and worker organization. Companies have a chance to build this market responsibly: pay fairly, design for safety, minimize data retention, and prove—publicly—that they can train powerful systems without siphoning more than they pay back.
If AI is the new electricity, then data labor is its power grid. The wires run through our living rooms now. That makes the engineering choices, and the ethics, impossible to ignore.
FAQ
What kinds of tasks does a mobile AI data app typically offer?
- Short video recordings of everyday actions (cooking, cleaning, organizing)
- Hands-only demonstrations of object interactions
- Narrated instruction-following clips
- Classification and quality checks on previously recorded clips
How much can you earn?
- Payouts are piece-rate and vary by task complexity and time. Effective hourly rates depend on setup time, reshoots, and approval rates. Track your time to estimate what you truly earn per hour.
Is it safe to film at home?
- Only if you manage risk. Remove identifiers, control audio, and avoid capturing faces, children, or sensitive documents. Consider creating a neutral “set” area just for filming.
Who owns the videos after upload?
- Typically, you grant broad licenses for use in training and improving AI systems. Review the terms to understand ownership, retention, and whether you can request deletion.
Can you delete your data later?
- You may be able to delete files from your account. However, once data trains a model, full withdrawal is technically hard and rarely guaranteed.
What happens if a task is rejected?
- You may receive feedback and can reshoot. Often, rejected work isn’t paid. Read the guidelines closely and appeal if there’s a clear mistake in review.
Could this kind of work replace delivery gigs?
- It’s more likely to be supplemental and variable. Availability can fluctuate, and skill affects earnings. Don’t expect stable, full-time income without significant optimization.
Will this help AI take physical jobs?
- It contributes to models that better understand and act in the world. Whether that displaces jobs or augments them depends on downstream applications and policy choices.
Source & original reading: https://www.wired.com/story/i-tried-doordashs-tasks-app-and-saw-the-bleak-future-of-ai-gig-work/