Hi! I’ve been exploring the outputs of the SDR model and I want to check whether my understanding is correct.
I’m working at the watershed scale, and my assumption is that if I sum all pixel values from the E’ layer (e_prime.tif), the total should match the sum of all pixel values in the sediment deposition layer (sed_deposition.tif). In other words, I expected the total amount of sediment that does not reach the stream (E’) to equal the total amount deposited within the watershed.
Is this assumption correct?
When I perform this calculation, the sums don’t match. In fact, for all the watersheds I’ve tested, the total E’ is consistently higher than the total sediment deposition. I’m trying to figure out whether this difference is expected given how the model works (and I might be misunderstanding something) or whether there is an error in my processing.
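For reference, the comparison I’m running is roughly equivalent to the sketch below. The arrays here are hypothetical stand-ins for the real rasters (in practice I read e_prime.tif and sed_deposition.tif with GDAL/rasterio and apply the same nodata-aware sum):

```python
import numpy as np

# Hypothetical stand-ins for e_prime.tif and sed_deposition.tif;
# in practice these would be read from the GeoTIFFs.
NODATA = -1.0
e_prime = np.array([[2.0, 1.5, NODATA],
                    [0.5, 3.0, 1.0]])
sed_deposition = np.array([[1.0, 1.5, NODATA],
                           [0.5, 2.0, 1.0]])

def masked_sum(arr, nodata=NODATA):
    """Sum pixel values, ignoring nodata pixels."""
    return arr[arr != nodata].sum()

total_e_prime = masked_sum(e_prime)            # 8.0
total_deposition = masked_sum(sed_deposition)  # 6.0
# The discrepancy I'm describing is this difference being > 0:
print(total_e_prime - total_deposition)
```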
Could someone help clarify whether these two layers should, in principle, add up to the same total? And if not, why might E’ typically be larger than sediment deposition?
Hi! I wanted to follow up on my previous post above with some additional findings after further testing of the SDR model.
While re-running and closely inspecting the outputs, I noticed something unexpected that might help explain part of the discrepancy I described earlier between total E’ (e_prime.tif) and total sediment deposition (sed_deposition.tif): even when using the sample data provided by NatCap, the sediment deposition layer contains non-zero values in pixels that are classified as streams. That is, in locations the model identifies as streams according to streams.tif, the sed_deposition.tif raster still shows sediment deposition values. Conceptually this surprised me, since my understanding is that sediment deposition (or trapping) should occur before sediment reaches the streams. Because of this, I am wondering whether this behavior reflects:
an expected aspect of the SDR model implementation that I may be misunderstanding, or
a potential issue in how sediment deposition is being computed or routed internally.
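This is the check that revealed the in-stream deposition values; the arrays below are small hypothetical stand-ins for sed_deposition.tif and streams.tif (streams raster: 1 = stream pixel, 0 = upland):

```python
import numpy as np

# Hypothetical stand-ins for sed_deposition.tif and streams.tif.
sed_deposition = np.array([[0.0, 0.4, 0.0],
                           [0.2, 0.0, 0.3],
                           [0.0, 0.1, 0.0]])
streams = np.array([[0, 0, 0],
                    [0, 0, 1],
                    [0, 1, 0]])

# Deposition recorded on pixels the model classifies as streams.
# Conceptually I expected these values to all be zero.
on_stream = sed_deposition[streams == 1]
print(on_stream.sum())  # total deposition recorded in-stream
```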
To explore this further, I tried reproducing the sediment trapping calculation outside the main InVEST workbench. In this external implementation, before running the deposition algorithm, I explicitly set flow direction values to zero in pixels already identified as streams. Doing this eliminated sediment deposition values within stream pixels, which seems more consistent with the conceptual model.
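The masking step I applied in that external implementation amounts to the following sketch (a hypothetical D8-style flow-direction array, with 0 meaning "no routing"; the real rasters come from the model's intermediate outputs):

```python
import numpy as np

# Hypothetical flow-direction and stream rasters. Setting flow direction
# to 0 on stream pixels means no flux is routed (and hence no deposition
# is computed) within the stream network.
flow_dir = np.array([[1, 2, 4],
                     [8, 16, 32],
                     [64, 128, 1]])
streams = np.array([[0, 0, 0],
                    [0, 0, 1],
                    [0, 1, 0]])

flow_dir_masked = np.where(streams == 1, 0, flow_dir)
```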
However, even after this correction, I still find that the total sum of E’ remains larger than the total sediment deposition. This makes me suspect that there may also be other contributing factors, such as:
some pixels not being treated as true high-point pixels in the routing logic, or
other edge cases in how contributing areas or flow paths are defined.
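To probe the first of those two possibilities, I wrote a small check for "high-point" pixels, i.e. pixels that no neighbor drains into. This is a sketch under the assumption of a standard ESRI D8 encoding (1=E, 2=SE, 4=S, 8=SW, 16=W, 32=NW, 64=N, 128=NE); InVEST's internal routing may use a different convention (e.g. multiple flow direction), so this is only illustrative:

```python
import numpy as np

# Assumed ESRI D8 encoding: direction code -> (row, col) offset of the
# downstream neighbor.
D8_OFFSETS = {1: (0, 1), 2: (1, 1), 4: (1, 0), 8: (1, -1),
              16: (0, -1), 32: (-1, -1), 64: (-1, 0), 128: (-1, 1)}

def high_point_mask(flow_dir):
    """Return a boolean mask of pixels with no upstream contributors."""
    inflow = np.zeros(flow_dir.shape, dtype=int)
    rows, cols = flow_dir.shape
    for r in range(rows):
        for c in range(cols):
            dr, dc = D8_OFFSETS.get(int(flow_dir[r, c]), (0, 0))
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                inflow[rr, cc] += 1
    return inflow == 0  # True where nothing drains into the pixel
```

Comparing a mask like this against the pixels the routing logic actually seeds as high points might show whether some are being missed.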
So, any insights, confirmations, or pointers to relevant documentation or code sections would be greatly appreciated.
Thanks for the post, and for really digging into some of the details of SDR. I reached out to some NatCap SDR experts, and unfortunately they can’t take a proper look at this right now. There are good software implementation questions in here as well, and I wonder if @esoth might be able to weigh in on some of the routing questions. We’ve got this pinned for us to review further; sorry we can’t get to it immediately!