WASHINGTON, D.C. — Remote work became one of the defining economic stories of the pandemic and continues to shape the conversation about the future of work and cities. But even on a seemingly simple question — how many people are working from home — surveys conducted by different entities during the pandemic produced estimates that ranged from about one in five workers to more than half of the workforce working remotely.
While some survey choices that led to these divergences have been fixed, other choices continue to differ and thus contribute to varying estimates of who is working remotely, and how often, today.
In a recent paper in the Review of Income and Wealth, we found that the discrepancy results from choices about what surveys measure, whom they include, and how they ask the questions.
A Gallup-partnered study, the Remote Life Survey (RLS), shows just how those choices can move the headline work-from-home number from 53.5% to much lower rates — and why precision in measurement matters to leaders who make data-driven decisions about work.
The RLS, conducted by web using the probability-based Gallup Panel in October–November 2020, asked U.S. adults the following:
“In the past month, about how often did you work from home as part of your job?”
- Never
- Once or twice
- About once a week
- 3–4 times a week
- I always worked from home.
Respondents were also asked a parallel question about their pre-pandemic work-from-home habits before Feb. 1, 2020, with a slightly expanded scale that added “A few times a year” and “About once a month” to capture the lower frequencies more common before the pandemic.
Among workers who were employed at the time of the survey, the RLS study found that the percentage working from home varied considerably, depending on how that circumstance was defined. At the strictest end, 31.6% said they always worked from home. Expanding the definition to include those working remotely 3-4 times a week brought that figure to 41.2%. Adding those who worked from home about once a week yielded 46.9%, and including those who did so once or twice in the prior month produced the broadest estimate of 53.5%.
In other words, the RLS alone can generate a range of more than 20 percentage points simply based on where you draw the line. The rightmost RLS data point shown in Figure 1 reflects the broadest definition — sometimes or always working from home — which totals 53.5% and includes all respondents who worked remotely at least once in the prior month.
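The arithmetic behind this range is simply cumulative: each broader definition adds the share of workers in the next frequency category. A minimal sketch, using the RLS category shares reported above (the per-category increments are back-calculated from the cumulative figures in the text):

```python
# Illustrative sketch, not Gallup's actual microdata code: cumulative
# work-from-home rates under progressively broader definitions.
shares = {
    "always":            31.6,          # always worked from home
    "3-4 times a week":  41.2 - 31.6,   # increment implied by the text
    "about once a week": 46.9 - 41.2,
    "once or twice":     53.5 - 46.9,
}

cumulative = 0.0
for category, share in shares.items():
    cumulative += share
    print(f"WFH at least '{category}': {cumulative:.1f}%")
```

Running the loop reproduces the four published estimates, from 31.6% under the strictest definition to 53.5% under the broadest.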
Yet during the same period, the U.S. Current Population Survey (CPS), the government’s flagship labor force survey, reported that only about 20% of workers were teleworking. Estimates from other academic and private surveys fell between those of the RLS and CPS. These include Gallup’s COVID-19 tracking data, the Real-Time Population Survey in partnership with the Dallas Federal Reserve (Bick et al.), and the Survey of Working Arrangements and Attitudes (SWAA) in partnership with researchers from Stanford University and the Atlanta Federal Reserve (Barrero et al.). These results were generally in the high 30s to high 40s over the same fall 2020 time frame.
The RLS shows how much the topline depends on the definition of remote work: counting only those who always work from home produces a much lower estimate than counting anyone who worked from home at least once in the prior month. The full paper examines four main sources of divergence: mode of data collection, inclusion of self-employed workers, occupational composition, and survey administration and design — particularly whether surveys count total remote work or only pandemic-induced remote work. Each one nudges the estimate a few percentage points, and together they account for most of the gap between the highest and lowest estimates.
The fourth source of divergence is easy to overlook: even within a single survey, how you define working from home changes the headline number considerably.
As the earlier figure shows, the RLS alone produces estimates ranging from 31.6% to 53.5%, depending on where you draw the definitional line, from always working from home to working from home at least once or twice in the previous month. Many surveys make similar but undisclosed definitional choices, in sampling or question wording for instance, which means two surveys asking ostensibly the same question may be counting respondents differently before any other methodological differences come into play.
Web-Only Versus Mixed-Mode Surveys
Many pandemic-era remote-work numbers come from web-based surveys, including the SWAA and the Real-Time Population Survey, both of which were conducted entirely online. The CPS used in-person and telephone interviews, and the RLS uniquely combined web interviews with a mail survey that reached adults without internet access. These features and the random sampling employed by the CPS and RLS are important because people reachable and willing to respond online are also more likely to have jobs that can be done remotely. When we compare the two modes in the RLS, we find that:
- 31.4% of web-only respondents “always” worked from home, versus just 6.3% of mail-only respondents.
- After accounting for the small percentage of mail respondents (623 out of 6,672 total), relying on web-only responses would overstate the “always WFH” rate by about 1.6 percentage points, and the “mostly WFH” rate by about 0.9 points.
These are modest corrections, but for organizations benchmarking against peers, a one- to two-point shift is not trivial. They also illustrate a broader principle: whom you can reach is already a filter on what you end up measuring. With remote work, a sample that cannot reach people offline can bias the conclusions from the start, purely through sample construction.
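The mechanics of a mode-mix correction are straightforward to sketch: blend the two modes’ rates in proportion to their sample sizes and compare against the web-only figure. Note this unweighted version overstates the correction relative to the paper’s survey-weighted 1.6-point figure; it illustrates the direction and logic of the adjustment, not the published estimate.

```python
# Illustrative, unweighted mode-mix arithmetic using the sample sizes and
# "always WFH" rates quoted in the bullets above. The paper's actual
# correction applies survey weights and is smaller (~1.6 points).
web_n, mail_n = 6672 - 623, 623      # 6,049 web and 623 mail respondents
web_rate, mail_rate = 31.4, 6.3      # percent "always WFH" by mode

blended = (web_n * web_rate + mail_n * mail_rate) / (web_n + mail_n)
overstatement = web_rate - blended
print(f"Blended estimate: {blended:.1f}%")
print(f"Web-only overstates by: {overstatement:.1f} points")
```

The direction is what matters: because mail respondents work from home far less often, any web-only sample shifts the topline upward.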
Do the Self-Employed Count?
Another decision is whether surveys include people who work for themselves. The American Time Use Survey (ATUS) by the U.S. Bureau of Labor Statistics, one of the most-cited pre-pandemic benchmarks of how and where Americans spend their working hours, effectively excludes the self-employed from its work-from-home estimates because annualized earnings are not observed for that group. Thus, many researchers drop these respondents from the sample.
When one compares the RLS pre-pandemic estimates against those from the ATUS, a notable gap emerges: ATUS estimates that about 8% of workers worked from home at least once a week before the pandemic, while the RLS — which asked the same work-from-home question for both the pre- and post-pandemic periods within the same survey — puts that figure at about 21%.
Using self-employment indicators in the RLS, we find that self-employed workers are dramatically more likely to work from home: Being self-employed increased the likelihood of working from home by 46.6 points before the pandemic and by 27.9 points after its onset. Excluding them reduces the pre-COVID-19 work-from-home rate by roughly three percentage points — about a quarter of the RLS–ATUS gap.
This matters for anyone using pre-COVID-19 survey data as a baseline to measure how much remote work increased during the pandemic. If the baseline understates how many people were already working from home before COVID-19, because it excludes self-employed workers who were disproportionately remote, then the apparent increase during the pandemic will look larger than it truly was. The pandemic shock gets inflated not because more people switched to remote work, but because the starting point was artificially low.
Which Jobs Are in Your Sample?
Not all jobs are easily done remotely, if at all. That sounds obvious, but the statistical consequences are large. Comparing the occupational distribution in RLS versus CPS, the paper finds statistically significant differences in the overall mix of occupations. RLS has more workers in education, training and library occupations, as well as in some high-remote-work categories such as computer and mathematical, legal, and life, physical, and social science roles, while CPS has more workers in service, sales, manufacturing and production, transportation, and financial/insurance/real estate/accounting roles.
We gauge the role of occupational composition by reweighting the sample. Among workers employed both before and after COVID-19 onset, the RLS at-least-weekly WFH rate falls from 51% to 43% when reweighted to match the CPS occupational mix. For workers newly adopting at-least-weekly WFH during the pandemic, the rate falls from 26% to 20%.
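Reweighting to a benchmark occupational mix amounts to replacing the sample’s occupation shares with the target’s shares when averaging the per-occupation WFH rates. A minimal sketch with hypothetical shares and rates (these toy numbers are illustrative, not the actual RLS or CPS figures):

```python
# Hypothetical post-stratification sketch: recompute the overall WFH rate
# after swapping the survey's occupational mix for a CPS-style target mix.
# All shares and rates below are invented for illustration.
sample_share = {"computer": 0.20, "education": 0.20, "service": 0.60}
target_share = {"computer": 0.10, "education": 0.15, "service": 0.75}
wfh_rate =     {"computer": 0.80, "education": 0.60, "service": 0.15}

unweighted = sum(sample_share[o] * wfh_rate[o] for o in wfh_rate)
reweighted = sum(target_share[o] * wfh_rate[o] for o in wfh_rate)
print(f"WFH rate, survey mix:    {unweighted:.1%}")
print(f"WFH rate, benchmark mix: {reweighted:.1%}")
```

Because the target mix holds more service jobs and fewer computer jobs, the reweighted rate falls, mirroring the 51%-to-43% drop reported for the RLS.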
What Exactly Are We Asking?
The largest source of divergence turns out not to be whom you survey, but what you ask them to report. For much of the pandemic, the CPS asked: “At any time in the last four weeks, did you telework or work at home for pay because of the coronavirus pandemic?”
That wording effectively turns remote work into a flow concept: work from home caused by COVID-19, not work from home in general. It instructs interviewers to code “No” for people who already worked entirely from home before the pandemic. By contrast, the RLS simply asks, “In the past month, about how often did you work from home as part of your job?” with no reference to COVID-19.
When we reproduced the CPS logic by excluding workers who were already remote before February 2020, the RLS headline rates fell sharply: “Always WFH” dropped from 31.6% to 23.6% (an eight-point decline) and “Sometimes or always WFH” dropped from 53.5% to 28.2% (a 25-point decline).
Under this “new remote worker because of COVID-19” definition, RLS finds that about 23% of workers newly shifted into always working from home, and about 28% into sometimes or always working from home. The CPS estimate at the time was 22%. In other words, once you align the object of measurement — adoption due to COVID-19 rather than the total remote level — the gap between CPS and RLS largely disappears. The dramatic difference between “about 20% remote” and “about half remote” is mostly about whether you’re counting workers who were already working remotely before the pandemic.
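The definitional alignment above reduces to a filtering rule: count a respondent as remote only if they work from home now and did not before February 2020. A minimal sketch with hypothetical respondent records:

```python
# Sketch of the level-versus-adoption distinction described above.
# Respondent records are invented for illustration.
respondents = [
    {"wfh_now": True,  "wfh_pre": True},   # remote before and after
    {"wfh_now": True,  "wfh_pre": False},  # newly remote
    {"wfh_now": False, "wfh_pre": False},  # on-site throughout
    {"wfh_now": True,  "wfh_pre": False},  # newly remote
]

# RLS-style level: anyone remote now.
total_remote = sum(r["wfh_now"] for r in respondents) / len(respondents)
# CPS-style adoption: remote now, but not before the pandemic.
newly_remote = sum(r["wfh_now"] and not r["wfh_pre"]
                   for r in respondents) / len(respondents)
print(f"Total WFH (level):       {total_remote:.0%}")
print(f"New WFH due to pandemic: {newly_remote:.0%}")
```

In this toy sample the two definitions give 75% versus 50%: the same respondents, counted under two different concepts.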
A separate and later (July 13-Aug. 12, 2021) wording experiment that we ran with Google Consumer Survey reinforces that result. When respondents were asked the CPS-style question including “because of the coronavirus pandemic,” 19% said they had worked from home in the past four weeks; when the COVID-19 clause was removed, that share rose to 23%. A four-point change from five extra words.
Bottom Line
These findings are not unique to the pandemic period. A recent working paper by Shelby Buckman, Jose Barrero, Nicholas Bloom and Steven Davis, drawing on nine U.S. data sources from 2023 to 2025, reaches a strikingly similar conclusion: headline WFH estimates that range from 15% to 35% or more in their raw form narrow to a band of roughly 18% to 28% once the WFH concept, target population and question design are aligned. Their preferred estimate — that about one-quarter of paid workdays are now performed remotely — emerges only after careful harmonization of the kind this blog has described.
For leaders, the lesson is not that the data are unreliable. It is that measurement design must be part of organizational strategy, rather than an afterthought or technical detail. If you are making decisions based on survey data about how and where people work, consider this:
1. Interrogate the concept, not just the number: Before acting on a statistic, ask: What exactly is being measured? Is this a level (any remote work), an intensity (days per week) or a change attributable to a specific cause (because of COVID-19)? Are certain groups, such as the self-employed or offline workers, excluded by design?
2. Treat survey design as an investment, not a commodity: Low-cost, generic surveys may be adequate when the stakes are low. But when you are considering long-term decisions about office space, labor markets or hybrid policy, the difference between a hastily assembled questionnaire and a rigorously designed instrument can easily be 10 to 20 percentage points. To build trust in the results, the underlying measurement needs to be robust: carefully sampled, thoughtfully worded and transparent about inclusion criteria.
3. Partner with experts who can explain the tradeoffs: Every design choice — mode of data collection, weighting, question phrasing — has consequences. Leaders should work with survey and measurement experts who can quantify those consequences and help align the metrics with the decisions at hand, rather than simply delivering a topline number.
4. Know your population and stay within its boundaries: Every survey speaks for a specific population, and conclusions should never travel further than the data allow. A question framed around pandemic-induced changes cannot be cited as a measure of total remote work levels; a survey that excludes the self-employed cannot characterize the full labor force.
Before acting on a number, leaders should ask: Does this sample actually include the workers most relevant to my decision — frontline employees, contractors, those without reliable internet access? The most costly measurement mistake is not a poorly designed survey — it is a well-designed one applied outside the boundaries of the population it was built to describe.
Remote work is only one example. The same logic applies to measuring employee engagement, wellbeing, skills, AI adoption and more. Small differences in what you measure and whom you reach can produce differing aggregate numbers — and, in turn, differing narratives about your workforce and your future. In an environment where organizations are awash in data, the real advantage goes to those who insist on precision.
