
Saturday, February 18, 2017

Targeting poverty with satellites


[This post is co-authored with Matt Davis, RA extraordinaire...]

About six months ago, our Stanford SustainLab crew had a paper in Science showing that you can make pretty good predictions about local-level economic wellbeing in Africa by combining satellite imagery with fancy tools from machine learning.  To us this was (and is) a promising finding, as it suggests a way to address the fundamental lack of data on economic outcomes in much of the developing world.  As has been widely acknowledged, these data gaps inhibit our ability both to evaluate which interventions reduce poverty and to target assistance to those who need it most.

A natural question that comes up (e.g. here) is: are these satellite-based estimates good enough to actually be useful for either evaluation or targeting?  Our original paper didn't really answer that question.  We're in the process of putting together a follow-up paper that looks at this question on an expanded country set and with an improved machine learning pipeline, but in the meantime we [by which I mean "we", meaning Matt and Neal] wanted to use some of the data from our original paper to more quantitatively explore this question.

Folks have been thinking for decades about whether using geographic information to target anti-poverty programs could improve their efficiency.  The standard thought experiment goes like this.  Imagine a policymaker with a fixed budget F that she can distribute as cash transfers to anyone in the country (this sort of cash transfer program happens all the time these days, it turns out).  Let's say in particular that the poverty metric this policymaker cares about is the squared poverty gap (SPG), a common poverty measure that takes into account the distance of individuals from the poverty line.  [If you're having trouble sleeping at night:  For a given poverty line P, an individual with income Y<P has a poverty gap of P-Y and an SPG of (P-Y)^2.  The SPG in a region is the average over all individuals in that region, where anyone with Y>=P has an SPG of 0.  So this measure gives a lot of weight to people far below the poverty line.]  If your goal is to reduce the SPG, you do best by giving money to the poorest person you can find until they're equal to the next poorest, then giving them both enough money until they're equal to the third poorest, and so on.
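To make this concrete, here's a minimal numpy sketch of the SPG and the poorest-first rule (the incomes, poverty line, budget, and function names are ours, purely for illustration):

```python
import numpy as np

def spg(incomes, P):
    """Squared poverty gap as defined above: mean over everyone of
    (P - Y)^2 for those below the line, and 0 for those at or above it."""
    gaps = np.maximum(P - np.asarray(incomes, dtype=float), 0.0)
    return np.mean(gaps ** 2)

def optimal_transfers(incomes, budget):
    """The poorest-first rule is equivalent to 'water-filling': raise the
    poorest incomes to a common level L that exactly exhausts the budget.
    A binary search over L finds that level."""
    y = np.asarray(incomes, dtype=float)
    lo, hi = y.min(), y.max() + budget
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if np.maximum(mid - y, 0.0).sum() > budget:
            hi = mid          # level too generous for the budget
        else:
            lo = mid          # budget not yet exhausted
    return np.maximum(lo - y, 0.0)

incomes, P, budget = [1.0, 2.0, 4.0, 10.0], 5.0, 3.0
t = optimal_transfers(incomes, budget)              # -> [2, 1, 0, 0]
print(spg(incomes, P), spg(np.add(incomes, t), P))  # SPG falls from 6.5 to 2.25
```

Note that the budget goes entirely to the two poorest people, lifting them both to the income of the third poorest; nobody above the line gets anything.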

So how should the policymaker distribute the cash?  If she knows nothing about where poor people are, a naive approach would be to distribute money uniformly -- i.e. to just give each of n constituents F/n dollars.  Clearly this could be pretty inefficient, since people already above the poverty line will get money and this won't reduce the SPG.

An alternate approach, now a few decades old in the economics literature, has been to construct "small area estimates" (SAE) of poverty by combining a detailed household survey with a less detailed but more geographically-comprehensive census.  The idea is that while only the household survey measures your outcome of interest (typically consumption expenditure), there is a small set of questions common to both the detailed household survey and the census (call these X).  These are typically questions about respondent age, gender, education, and perhaps a few basic questions on assets.  So using the household survey you can fit a model Y = f(X), which tells you how the Xs map into consumption expenditure (your outcome of interest), and then use the same Xs in the census, together with your fitted f(X), to predict consumption expenditure for everyone in the census.  Then you can aggregate these predictions to any level of interest (e.g. village or district), and use them to potentially inform your cash transfers.  This has been explored in a number of papers, e.g. here and here, and apparently has been used to inform policy in a number of settings.
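A schematic of the SAE recipe, with simulated stand-ins for the survey, the census, and the shared Xs (nothing here comes from real data; all names and numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Household survey: consumption Y plus covariates X shared with the census
# (X stands in for age, education, basic assets; everything is simulated).
n_survey, n_census, n_villages = 500, 10000, 50
beta = np.array([0.5, 0.3, 0.2])
X_survey = rng.normal(size=(n_survey, 3))
Y_survey = X_survey @ beta + rng.normal(scale=0.5, size=n_survey)

# Step 1: fit Y = f(X) on the survey (a linear f, via least squares).
A = np.column_stack([np.ones(n_survey), X_survey])
coef, *_ = np.linalg.lstsq(A, Y_survey, rcond=None)

# Step 2: the census has X for everyone but no Y; predict it.
X_census = rng.normal(size=(n_census, 3))
Y_hat = np.column_stack([np.ones(n_census), X_census]) @ coef

# Step 3: aggregate the predictions to villages (random labels here).
village = rng.integers(0, n_villages, size=n_census)
village_means = np.bincount(village, weights=Y_hat) / np.bincount(village)
print(village_means.shape)  # one small-area estimate per village
```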

Our purpose here is to compare a targeting approach that uses our satellite-based estimates to either the naive (uniform) transfer or a transfer that's informed by SAE estimates.   To actually evaluate these approaches against each other, we are going to just use the household survey data, aggregated to the cluster (village) level.  In particular, we estimate both the SAE and the satellite-based model on a subset of our household survey data in each country, make predictions for the remainder of the data that the models have not seen, and in this holdout sample, evaluate for a fixed budget the reduction in the SPG you'd get if you allocated using either the naive, SAE, or satellite-based model.  The allocation rule for SAE and satellites is the one described above:   giving money to the poorest village until equal to the next poorest, giving them both enough money until they're equal to the third poorest, and so-on.
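A toy version of this horse race, with invented incomes, noise, and budget: the water-filling function implements the poorest-first rule using the model's *predicted* village incomes, and we then score the resulting transfers against the *true* incomes in the holdout, exactly as in the exercise described above.

```python
import numpy as np

rng = np.random.default_rng(1)
P, budget, n = 2.0, 50.0, 200

true_y = rng.lognormal(0.0, 0.7, size=n)        # holdout village incomes
pred_y = true_y + rng.normal(0.0, 0.3, size=n)  # a model's noisy predictions

def spg(y):
    return np.mean(np.maximum(P - y, 0.0) ** 2)

def waterfill(y, budget):
    """Poorest-first transfers: binary-search the common level L."""
    lo, hi = y.min(), y.max() + budget
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if np.maximum(mid - y, 0.0).sum() > budget:
            hi = mid
        else:
            lo = mid
    return np.maximum(lo - y, 0.0)

uniform = np.full(n, budget / n)          # naive transfer: everyone gets F/n
targeted = waterfill(pred_y, budget)      # targeting driven by *predictions*
print(spg(true_y), spg(true_y + uniform), spg(true_y + targeted))
```

Because the predictions are imperfect, the targeted scheme misallocates some money, but as long as the predicted ranking roughly tracks the true one, it concentrates transfers below the poverty line far better than the uniform split.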

Below is what we get, with the figure showing how much reduction in SPG you get from each targeting scheme under increasing program budgets.  The table summarizes the results, showing the cross-validated R2 for the SAE and satellite-features models (the goodness-of-fits from the models that we then use to make predictions that are then used in targeting), and the amount of money each approach saves relative to the uniform transfer to achieve a 50% decline in SPG.



Country     R² (SAE)   R² (Features)   Budget saved, SAE (%)   Budget saved, Features (%)
Malawi      0.45       0.44            6.8                     8
Nigeria     0.38       0.43            20.7                    25.9
Tanzania    0.62       0.54            36                      18.3
Uganda      0.64       0.44            39.1                    20.7

(Budget savings are relative to the uniform transfer, for a 50% decline in SPG.)

What do we learn from these simulations?  First, geographical targeting appears to save you money relative to the naive transfer.  You achieve a 50% reduction in SPG for 7-40% less budget than a naive transfer when you use either SAE or satellites to target.  Second, and not surprisingly, when the satellite model and the SAE model fit the data roughly equally well (e.g. Malawi, Nigeria), they deliver similar savings relative to a uniform transfer.  But the amount of budget that you save by using SAE or satellites to target transfers can differ even for similar R2.  Compare Malawi to Nigeria:  the targeted approaches help a lot more in Nigeria than in Malawi, which is consistent with Malawi having poor people all over the place (e.g. see the maps we produced for Malawi and Nigeria), including in somewhat better-off villages, which in turn makes targeting on the village mean less helpful.  Third, SAE leads to more efficient targeting in the two countries where the SAE model has more predictive power -- Tanzania and Uganda.

We're somewhat biased of course, but this to us is fairly promising from a satellite perspective.   First, these SAE estimates are probably an upper bound on actual SAE performance, since it's very rarely the case that you have a household survey and a census in the same year, and we've been generous in the variables we included to calculate the SAE (some of which would not be available in many censuses).  Second, since many countries lack either a census or a household survey, it's not clear whether we can use SAE in these countries at all, whereas in our Science paper we showed decent out-of-country fits for the satellite-based approach.  Third, we're working on improvements to the satellite-based estimates and anticipate meaningfully higher R2 relative to these benchmarks.  And finally, and perhaps most importantly, the satellite-based approach is going to be incredibly cheap to implement relative to SAE in areas where surveys don't already exist.  So you might be willing to trade off some loss in targeting performance given the low expense of developing the targeting tool. 

So our tentative conclusion is that satellites might have something to offer here.  They're probably going to be even more useful when combined with other approaches and data -- something that we are exploring in ongoing work. 

Thursday, August 18, 2016

Economics from space

We've got a paper out in Science today that demonstrates a new way to use satellite imagery to predict economic well-being in poor countries (see project website here).  The paper is a collaboration between some of us social scientists (or social "scientists", with emphatic air quotes, as my wife puts it) and some computer scientists across campus -- folks who have apparently figured out how to use computers for more than email and Youtube surfing.

We're hoping that this is the first of many projects with these guys, and so have codified our collaboration here, with one of those currently-popular dark-hued website designs where you scroll around a lot.

So why is it sensible to try to use satellite imagery to predict economic livelihoods?  The main motivation is the lack of good economic data in many parts of the developing world.  As best we can tell, between the years 2000 and 2010, one quarter of African countries did not conduct a survey from which nationally-representative poverty estimates could be constructed, and another 40% conducted only one survey.  So this means that in two-thirds of countries on the world's poorest continent, you've got very little sense of what's going on, poverty-wise.  And even a lot of the surveys that do get conducted are only partially in the public domain, meaning you've got to employ some trickery to even get the shape of the income distribution in these countries (and survey locations are still unavailable!).

This lack of data makes it hard to track specific targets that we've set, such as the #1 Sustainable Development Goal of eliminating poverty by 2030.  It also makes it hard to evaluate whether specific interventions aimed at reducing poverty are actually working.  The result is that we currently have little rigorous evidence about the vast majority of anti-poverty interventions undertaken in the developing world, and no real way to track progress towards SDGs or any other target.

While we don't collect a lot of survey data for many locations in the developing world, we collect other sources of information about these places constantly -- satellite information being one obvious source.  So our goal in this paper was to see whether we could use recent hi-res imagery to predict economic outcomes at a local level, and fill in the gaps between what we know from surveys.

We are certainly not the first people to think of using satellites or other "unconventional" data sources to study economic output in the developing world.  For instance, here is a 2012 paper by Adam Storeygard that uses nightlights to improve GDP estimates at the country level, and here is a paper from about 9 months ago by Josh Blumenstock and company where they use call data records from a cell phone company to predict local-level economic outcomes in Rwanda.  But what our approach brings to the table is that (unlike Storeygard et al) we can make very local predictions, and that (perhaps unlike Blumenstock et al) our approach is very easy to scale, given that satellite imagery is available free or at very low cost for every corner of the earth, and more rolls in each day.

For a quick explanation of what we do in the paper, check out this short video that we made in collaboration with these guys.  Sort of an experiment on our end, comments or slander welcome in the comments section.



The main innovation in the paper is in figuring out what information in the hi-res daytime imagery might be useful for predicting poverty or well-being.  Standard computer vision approaches to interpreting imagery typically get fed huge training datasets - e.g. millions of "labeled" images (e.g. "dog" vs "cat") that a given model can use to learn to distinguish the two objects in an image.  But the whole problem here is that we have very little training data -- i.e. few places where we can tell a computer with certainty that a specific location is rich or poor.

So we take a two-step approach to solving this problem.  First, we use lower-resolution nightlights images to train a deep learning model to identify features in the higher-resolution daytime imagery that are predictive of economic activity.  The idea here -- building on the paper cited above -- is that nightlights are a good but imperfect measure of economic activity, and they are available everywhere on earth.  So the nightlights help the model figure out what features in the daytime imagery are predictive of economic activity.  Without being told what to look for, the model is able to identify a number of features in the daytime imagery that look like things we recognize and tend to think are important in economic activity (e.g. roads, urban areas, farmland, and waterways -- see Fig 2 in our paper).

Then in the last step of the process, we use these features in the daytime imagery to predict village-level wealth, as measured in a few household surveys that were publicly available and geo-referenced.  (As our survey measures we use data from the World Bank LSMS for consumption expenditure and from the DHS for assets.)  We call this two-step approach "transfer learning", in the sense that we've transferred knowledge learned in the nightlights-prediction task to the final task of predicting village poverty.  Nightlights are not used in the final poverty prediction; they are only used in the first step to help us figure out what to use in the daytime imagery.
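Schematically, that last step looks something like the sketch below, where the convolutional feature extractor is stubbed out with simulated features (the ridge-regression final step is the real idea; every number and name here is invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 (stubbed): in the paper, a convolutional net trained on the
# nightlights task turns each daytime image into a feature vector.
# Here we just fake that output with random features and a linear
# "wealth" signal so the final step has something to fit.
n_villages, n_features = 500, 10
F = rng.normal(size=(n_villages, n_features))               # image features
beta = rng.normal(size=n_features)
wealth = F @ beta + rng.normal(scale=0.3, size=n_villages)  # surveyed wealth

# Step 2: ridge regression from image features to surveyed village wealth
# (closed form: w = (F'F + lam*I)^{-1} F'y).
lam = 1.0
w = np.linalg.solve(F.T @ F + lam * np.eye(n_features), F.T @ wealth)

pred = F @ w
r2 = 1 - np.sum((wealth - pred) ** 2) / np.sum((wealth - wealth.mean()) ** 2)
print(round(r2, 2))  # high here only because the fake data is exactly linear
```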

Josh Blumenstock (or some Science art editor) has a really nice depiction of the procedure, in a commentary that Josh wrote on our piece that also appeared today in Science.

The model does surprisingly well.  Below are cross-validated model predictions and R-squareds for consumption and assets, where we are comparing model predictions against survey measurements at the village level in five African countries (Uganda, Tanzania, Malawi, Nigeria, Rwanda).  The cross-validation part is key here -- basically we split the data in two, train the model on one part of the data, and then predict for the other part of the data that the model hasn't seen.  This guards against overfitting.
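The splitting logic is easy to sketch; here's a k-fold version with a simple linear model on simulated data (our own toy example, not the paper's pipeline):

```python
import numpy as np

def cross_val_r2(X, y, k=5, seed=0):
    """Hold out each fold in turn, train on the rest, predict the held-out
    part, then score all the out-of-sample predictions together."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    preds = np.empty(len(y))
    for i, test in enumerate(folds):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        A = np.column_stack([np.ones(len(train)), X[train]])
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        preds[test] = np.column_stack([np.ones(len(test)), X[test]]) @ coef
    return 1 - np.sum((y - preds) ** 2) / np.sum((y - y.mean()) ** 2)

# Simulated example: the out-of-sample R2 stays honest about overfitting,
# because no observation is ever predicted by a model that saw it.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = X @ np.array([1.0, 0.5, -0.5, 0.2]) + rng.normal(scale=0.8, size=300)
print(round(cross_val_r2(X, y), 2))
```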



We can then use these predictions to make poverty maps of these countries.  Here is a prototype (something we're still working on), with estimates aggregated to the district level:

[Edit:  Tim Varga pointed out in an email that, while beautiful, the below plot is basically meaningless to the 10% of men and 1% of women who are red/green colorblind.  Duh - and sorry!  (Only silver lining is that this mistake harmed men differentially, subverting the normal gender bias).  Nevertheless, we will fix..]


Maybe the most exciting result is that a model trained in one country appears to do pretty well when applied outside that country, at least within our set of 5 countries.  For example, a model trained in Uganda does a pretty good job of predicting outcomes in Tanzania, without ever having seen Tanzanian data.  Granted, this would likely work a lot worse if we were trying to make predictions for a more dissimilar country (say, Iceland).  But it suggests that at least within Africa -- the continent where data gaps remain largest -- our approach could have wide application.

Finally, we don't really view our approach as a substitute for continuing to do household surveys, but rather as a strong complement -- as a way to dramatically amplify the power of what we learn from these surveys.  It's likely that we're going to continue to learn a lot from household surveys that we might never learn from satellite imagery, even with the fanciest machine learning tricks.

We are currently trying to extend this work in multiple directions, including evaluating whether we can make predictions over time using lower-res Landsat data, and in scaling up the existing approach to all of Africa.  More results coming soon, hopefully.  We also want to work with folks who can use these data, so if that happens to be you, please get in touch!  

Wednesday, February 11, 2015

Measuring yields from space


[This post is co-written with Florence Kondylis at the World Bank, and a similar version was also posted over at the World Bank's Development Impact blog.]

One morning last August a number of economists, engineers, Silicon Valley players, donors, and policymakers met on the UC-Berkeley campus to discuss frontier topics in measuring development outcomes. The idea behind the event was not that economists could ask experts to create measurement tools they need, but instead that measurement scientists could tell economists about what was going on at the frontier of measuring development-related outcomes.  One topic that generated a lot of excitement -- likely due to David Lobell's charm at the podium -- was the potential for a new crop of satellites to remote-sense (i.e. measure) important development outcomes.

Why satellite-based remote sensing?
The potential ability to use satellites to measure common development outcomes of interest excites researchers and practitioners for a number of reasons, chief among them the amount of time and money we typically have to spend to measure these outcomes the “traditional” way (e.g. conducting surveys of households or firms).  Instead of writing large grants, spending days traveling to remote field sites, hiring and training enumerators, and dealing with inevitable survey hiccups, what if instead you could sit at home in your pajamas and, with a few clicks of a mouse, download the data you needed to study the impacts of a particular program or intervention?

The vision of this “remote-sensing” based approach to research is clearly intoxicating, and is being bolstered by the vast amount of high-resolution satellite imagery that is now being acquired and made available.  The recent rise of “nano-“ or “micro”-satellite technology – basically, fleets of cheap, small satellites that image the earth in high temporal and spatial resolution, such as those being deployed by our partner Skybox – could hold particular promise for measuring the types of outcomes that development folks often care about.  This is perhaps most obviously true in agriculture, where unlike in the manufacturing sector, most production takes place outside.

How does it work?
For most agricultural crops – particularly the staple crops grown by African smallholders, such as maize – pretty much anyone can look at a field and see the basic difference between a healthy, highly productive crop and a low-yielding crop that is nutrient or moisture stressed.  One main clue is color: healthy vegetation reflects and absorbs different wavelengths of light than less-healthy vegetation, which is why leaves on healthy maize plants look deep green and leaves on stressed or dead plants look brown.  Sensors on satellites can discern these differences in the visible wavelengths, but they also measure differences at other wavelengths, and this turns out to be particularly useful for agriculture.  Healthy vegetation, it turns out, absorbs light in the visible spectrum and reflects strongly in the near infrared (which the human eye can’t see), and simple ratios of reflectance at these two wavelengths form the basis of most satellite-based measures of vegetative vigor – e.g. the familiar Normalized Difference Vegetation Index, or NDVI.   High ratios basically tell you that you’re looking at plants with a lot of big, healthy leaves.
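NDVI itself is a one-liner; the reflectance values below are illustrative rather than taken from any particular sensor:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from per-band reflectances:
    (NIR - Red) / (NIR + Red), ranging from -1 to +1."""
    nir, red = np.asarray(nir, dtype=float), np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Healthy canopy reflects strongly in the near infrared and absorbs red,
# so NDVI is pushed toward +1; bare or stressed ground sits near 0.
print(ndvi(0.50, 0.08))   # dense green vegetation: ~0.72
print(ndvi(0.25, 0.20))   # bare or stressed ground: ~0.11
```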

The trick is then to be able to map these satellite-derived vegetation indices into measures of crop yields.  There are two basic approaches (see David's nice review article for more detail).  The first combines satellite vegetation indices with on-farm yield observations as collected from the typical household or agricultural surveys.  By regressing the “true” survey-based yield measure on the satellite-based vegetation index, you get an estimated relationship between the two that can then be applied to other agricultural plots that you observe in the satellite data but did not survey on the ground.  The second approach combines the satellite data with pre-existing estimates of the relationship between final yield and vegetative vigor under various growing conditions (often as derived from a crop simulation model, which you can think of as an agronomist’s version of a structural model). Applying satellite reflectance measures to these relationships can then be used to estimate yield on a given plot.  A nice feature of this second approach is that it is often straightforward to account for the role of other time-varying factors (e.g. weather) that also affect the relationship between vegetation and final yield.
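A sketch of the first (regression-calibration) approach on simulated data, with made-up yields and index values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Surveyed plots: ground-truth yields (t/ha) and a peak-season vegetation
# index; both simulated here, with an assumed linear yield-index relationship.
n = 60
vi = rng.uniform(0.3, 0.8, size=n)
yld = 1.0 + 7.0 * vi + rng.normal(scale=0.5, size=n)

# Approach 1: regress the survey-based yields on the satellite index...
A = np.column_stack([np.ones(n), vi])
(b0, b1), *_ = np.linalg.lstsq(A, yld, rcond=None)

# ...then carry the fitted relationship to plots seen only from space.
vi_unsurveyed = np.array([0.45, 0.70])
print(b0 + b1 * vi_unsurveyed)   # predicted yields for the unsurveyed plots
```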

How well does it work?
These approaches have mainly been applied to larger farm plots in the developed and developing world, at least in part because until very recently the available satellite imagery was generally too coarse to resolve the very small plot sizes (e.g. less than half an acre) common in much of Africa.  For instance, the resolution of the MODIS sensor is 250m x 250m, meaning one pixel would cover more than 15 one-acre plots. Nevertheless, these approaches have been shown to work surprisingly well on these larger fields.  Below are two plots, both again from David and coauthors’ work, showing the relationship between predicted and observed yields for wheat in Northern Mexico, and maize (aka “corn”, for Americans) in the US Great Plains.   Average plot sizes in both cases are > 20 hectares, equivalent to at least 50 one-acre plots.


Top plot:  wheat yields in northern Mexico, from Lobell et al 2005.  Bottom plot:  corn yields in the US Great Plains, from Sibley et al 2014



Although success is somewhat in the eyes of the beholder here, the fit between observed and satellite-predicted yields is pretty good in both of these cases, with overall R2s of 0.63 in the US case and 0.78 in the Mexico case.  And, at least in both of these cases, the “ground truth” yield data was not actually used to construct the yield prediction – i.e. they are using the second approach described above.  This was possible in this setting because these were crops and growing conditions for which scientists have a good mechanistic understanding of how final yield relates to vegetative vigor.

From rich to poor, big to small
Applying these approaches to much of the developing world (e.g. smallholder plots in Africa) has been harder.  This is not only because of the much smaller plot sizes, and thus the difficulty (impossibility, often) of resolving them in existing satellite imagery, but also because of a lack of either (i) ground truth data to develop the satellite-based predictions, and/or (ii) a satisfactory mechanistic understanding in these environments of how to map yields to reflectance measures.

New data sources from both the ground and sky are starting to make this possible.  Sensors on the new micro-satellites mentioned above often have sub-meter resolutions, meaning smallholder plots are now visible from space (a half-acre plot would be covered by over 2000 pixels).  Furthermore, this imagery is being acquired often enough to ensure at least a few cloud-free images during the growing season -- not a small problem in the rainy tropics.

Working with David and some collaborators in Kenya, Uganda, and Rwanda, we are linking this new imagery with ground-based yield data we are collecting to understand whether the satellite data can capably predict yields on heterogeneous smallholder plots.  Below is a map of some of the smallholder maize fields we have mapped and are tracking in Western Kenya, as part of an ongoing experiment with smallholder farmers in the region.

Locations of some plots we are tracking in Western Kenya

Some of the long run goals of this work are (i) to allow researchers who already have information on plot boundaries and crop choice to use satellite images to estimate yields, and (ii) to give researchers who do not have plot boundaries but who are interested in broader-scale agricultural performance (e.g. at the village or district level) a way to track yields at that scale. This work is ongoing, but given the experience in developed countries, we are hopeful.

Some challenges.
Nevertheless, there are clear challenges to making this approach work at scale, and clear limitations (at least in the near term) to what this technology can provide.   Here are a few of the main challenges:

  1. Which boundaries and which crops. To measure outcomes at the level of the individual farm plot, satellite-based measures will be most easily employable if the researcher already knows the plot boundaries and knows what crop is being grown.   As satellite imagery improves and as computer vision algorithms are developed to remotely identify plot boundaries, both of these constraints will likely be relaxed, but the researcher will still need some ground information on which plots belong to whom. 
  2. Measurement error. Even with plot boundaries in hand, the fact that satellite imagery will not be able to perfectly predict yields means that using satellite-predicted yields as an outcome will likely reduce statistical power (although it’s not immediately clear how much noisier satellite estimates will be, given that survey-based measures of these outcomes – e.g. from farmer self-reports – are likely also measured with error).  This almost certainly means that this technology will not be equipped to discern small effects in the smaller-sized ag RCTs that often get run.
  3. Moving beyond yield.  Finally, even with plot boundaries in hand and a well-powered study, satellites are going to have a hard time measuring many of the other outcomes we care about – things like profits or consumption expenditure.  Satellites might in the near term be able to get at related outcomes such as assets (something we’re also working on), but it’s clearly going to be hard to observe most household expenditures directly. 
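The power concern in point 2 is easy to see in a small simulation (sample sizes, effect size, and noise levels are all invented): the same true treatment effect gets detected far less often once the outcome is a noisy satellite prediction rather than the true yield.

```python
import numpy as np

rng = np.random.default_rng(3)
n, effect, reps = 400, 0.15, 2000   # per-arm plots, effect in SD units, sims

def rejection_rate(pred_error_sd):
    """Share of simulated RCTs that detect the effect at the 5% level when
    the measured outcome carries extra (satellite-prediction) noise."""
    hits = 0
    for _ in range(reps):
        control = rng.normal(0.0, 1.0, n) + rng.normal(0.0, pred_error_sd, n)
        treated = rng.normal(effect, 1.0, n) + rng.normal(0.0, pred_error_sd, n)
        se = np.sqrt(control.var(ddof=1) / n + treated.var(ddof=1) / n)
        hits += abs(treated.mean() - control.mean()) / se > 1.96
    return hits / reps

p_clean, p_noisy = rejection_rate(0.0), rejection_rate(1.0)
print(p_clean, p_noisy)   # power drops as the prediction error grows
```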


Putting these difficulties together, should we just abandon this whole satellite thing?  We think not, for two reasons.  The first reason is that as we (hopefully) improve our ability to accurately measure smallholder yields from space, this ability would provide a clear complement to existing surveys.  For instance, if yields are a primary outcome, imagine just being able to do a baseline survey (where field boundaries are mapped) and then measure your outcome at follow-up from the sky.  This would make an entire study both faster and cheaper, which should allow for larger sample sizes, which would in turn help deal with the measurement error issue above.

Second, we still have a surprisingly poor understanding of why some farms, and some farmers, appear to be so much more productive than others.  Is it the case that relatively time-invariant factors like soil type and farmer ability explain most of the observed variation, or are time-varying factors like weather more important?  Satellite data might be particularly useful for this question (David's review paper, and his earlier G-FEED post, gives some really nice examples), because you can assemble huge samples of farm plots that can then be easily followed over time.  Satellite data in this setting therefore might afford more power, and you can do it all in your pajamas.


Thursday, May 22, 2014

Regressing to the mean

As a general rule, people like to take credit when good things happen to them, but blame bad luck when things go wrong. This probably helps us all get through the day, feeling better about ourselves and other people we like. For example, I’d like to think that the paper rejection I got last month was bad luck, while the award I got last year was totally deserved. But anyone who pays attention to professional sports, or the stock market, knows that success in any single day or even year has a lot to do with luck. It’s not that luck is all you need, but it often makes the difference between very evenly-matched competitors. That’s why people or teams who perform particularly well in one year tend to drop back the next, a.k.a. regressing to the mean.

Take tennis. There’s clearly a skill separation between professionals and amateurs, and between the top four or five professionals and everyone else. But among the top, it’s hard to know who will win on any day, and it’s very hard to sustain a streak of victories against other top players. So even when someone like Rafael Nadal, who has had some remarkable winning streaks, talks about how he got a few key bounces, he’s as much being an astute observer as a gracious winner.

Or the stock market. It’s well known that even the best fund managers have a hard time outperforming the market for a long time. Even if someone has beaten it five years in a row, there’s a good chance it was just luck given how many fund managers are out there. I don’t tend to watch interviews with fund managers as much as athletes, but something tells me they might be a little less inclined to turn down the credit.

So what about agriculture? It’s not a source of entertainment or income for most of us, so we don’t spend much time thinking about it, and you won’t find any posts on fivethirtyeight about it. But one thing that struck me early on working in agriculture is how farmers are just as prone to thinking they are above average as the rest of us. More specifically, if you ask why some fields around them don’t look as good, they will talk about how that farmer is lazy, has another job, doesn’t take care of his soil, etc.

As far as I can tell, this isn’t a purely academic question. Understanding how much farmers vary in their ability to consistently produce a good crop is important if you want to know the best ways of improving agricultural yields. If there really are a bunch of star performers out there, then letting them take over from the laggards by buying up their land, or training other farmers to use best-practices, could be a good source of growth in the next decade. For example, here’s a cool presentation about a new effort in India to have farmers spread videos of best practices through social networks.

There’s a fairly obvious but not perfect link to the idea of yield gaps. People who say that closing yield gaps is a big “low-hanging fruit” for yield improvement often have a vision of better agronomy being able to drastically raise yields. The link isn’t perfect, because it could be that even the best farmers in a region are underperforming, agronomically speaking, for instance if fertilizers are very expensive. And it could be that some farmers consistently out-perform not because of better management, but because they are endowed with better soils (though this can be sorted out partly by using data on soil properties). Even with these caveats, understanding how much truly better the “best” farmers are could help give a more realistic view of what could be achieved with improved agronomy.

The key here is the “how much” part of it. Nobody can argue that some farmers aren’t better than others, just as nobody can say that Warren Buffet isn’t better than an average fund manager. The question is whether this is a big opportunity, or if it’s best to focus efforts elsewhere. I’ve been trying to get a handle on the “how much” over the years by using satellites to track yields over time. I’ve used various ways of trying to display this in a simple way, but nothing was too satisfying. So let me give it another shot, based on some suggestions from Tony Fischer during a visit to Canberra.

The figures below show wheat yield anomalies (relative to average) for fields ranked in the top 10% for the first year we have satellite data (green line). Each panel is for a different area, and soils don’t vary too much within each area. Then we track those fields over the following years to see if those fields are able to consistently outperform their neighbors. Similarly we can follow fields in any other group, and I show both the bottom decile (0-10% in blue) and the fifth (40-50% in orange). The two horizontal lines show the mean yield anomaly in the first group for the year they ranked in the top, and their mean yield anomaly in all the other years. If it was all skill (or something else related to a place like the soil quality) the second line would be on top of the first. If it was all luck, the second line would be at zero.


So what’s the verdict? The top performers in the first year definitely show signs of regressing to the mean, as their mean yield drops much closer to the overall average in other years. Similarly, the worst performers “regress” back up toward the mean. But neither group jumps all the way back to zero, which says that some of the yield differences are persistent. In the two left panels, the anomalies are a little more than one-third the size they were in the initial year. In the right panel, the anomalies are about half their original value, indicating relatively more persistence. That makes sense since we know the right panel is a region where some farmers consistently sow too late (see this paper for more detail).
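The arithmetic behind these persistence ratios drops out of a simple skill-plus-luck model: if a field's yield anomaly is persistent skill plus independent yearly luck, then the fraction of the top group's anomaly that survives into other years is just skill's share of total variance. A quick simulation with invented variances:

```python
import numpy as np

rng = np.random.default_rng(4)
n_fields, n_years = 20000, 8

# Yield anomaly = persistent part (soil, farmer skill) + yearly luck.
# The 0.4 and 0.6 standard deviations are made up; the persistence ratio
# below depends only on their relative sizes.
skill = rng.normal(0.0, 0.4, n_fields)
luck = rng.normal(0.0, 0.6, size=(n_years, n_fields))
anom = skill + luck

top = np.argsort(anom[0])[-n_fields // 10:]   # top decile in year 1
first = anom[0, top].mean()                   # their anomaly when selected
later = anom[1:, top].mean()                  # their anomaly in all other years
print(round(later / first, 2))  # ~ 0.4^2 / (0.4^2 + 0.6^2) = skill's var share
```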

So a naive look at any snapshot of performance over a year or two would be way too optimistic about how big the exploitable yield gap might be. It’s important to remember that performance tends to regress to the mean. At the same time, there are some consistent differences that amount to roughly 10% of average yields in these regions (where mean yield is around 5-6 tons/ha). And with satellites we can pinpoint where the laggards are and target studies on what might be constraining them. At least that's the idea behind some of our current work. 


Whether other regions would look much different than the 3 examples above, I really don’t know. But it shouldn’t be too hard to find out. With the current and upcoming group of satellites, we now have the ability to track performance on individual fields in an objective way, which should serve as a useful reality check on discussions of how to improve yields in the next decade.