I took the code for running a sensitivity analysis on NDR from an older version of InVEST and attempted to update it to the current version for use on phosphorus with my data. The problem I'm having is that the results are not accurate (they go up and down as load_p increases). I ran the exact same data in the GUI version of InVEST and got more realistic results, which the code never produced. On closer examination, I found a discrepancy in the effective retention layers: in the coded output, each cell is either equal to the eff_p value, 0, or NoData (the layer from the GUI is correct). This translates into holes in the NDR layer, which seems to be causing what I'm seeing, but I have no idea how to fix it. There may be other issues contributing to these strange results, but I imagine the NoData in the NDR layer is a large driver.
I have attached a link to the code I am working from that is having issues; I cut most of the code down to just a loop that varies load_p for each LULC individually. There may be some odd things in the code that were added (and are probably unnecessary) in attempts to fix this, but I was having this issue from the beginning, when the code was identical to the original (apart from minor necessary changes): Sensitivity analysis script for running InVEST NDR with a range of values for different parameters · GitHub (the original code).
Anyway, any help with this would be appreciated!
(The code is being run with InVEST 3.18.0. The code doesn't generate log files, but if those would help, I can attempt to get them.)
It’s not quite clear what problem you are asking about. You say,
indicating a possible problem with the code that is setting up the different model runs?
But you also say,
Can you isolate this problem to a single model run? And then perhaps even reproduce it in the GUI? That would make it significantly easier to troubleshoot this.
I adjusted the code so it just changes the load_p value for lucode 82 (multiplying it by 0.5), then finishes.
As part of the code, it adjusts the biophysical table and saves the adjusted version in a new folder. I used the exact same inputs for the GUI run that I use in the code, including the biophysical table that the code created, to run the model from the GUI. Both are using InVEST 3.18.0.
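For reference, the table adjustment can be sketched roughly like this (a minimal sketch, assuming the NDR biophysical table is a CSV with `lucode` and `load_p` columns; the paths and helper name are hypothetical):

```python
import pandas as pd

def adjust_load_p(table_path, out_path, lucode=82, factor=0.5):
    """Scale load_p for one LULC class and save the adjusted table."""
    df = pd.read_csv(table_path)
    df.loc[df["lucode"] == lucode, "load_p"] *= factor
    df.to_csv(out_path, index=False)
    return df
```

Writing the adjusted table to a new folder (rather than editing in place) keeps the original inputs intact for the GUI comparison.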
The discrepancy I'm seeing is a different total P export for the watershed between the two runs in the .gpkg outputs (1083 vs. 1414).
When I dig deeper, the effective retention layer from the Python code version has NoData values and seems not to have calculated correctly. The NoData holes also appear in the NDR layer (though I imagine they stem from the effective retention layer), and these holes align with cells where the flow direction layer has a value of 0. The flow direction layers from the code run and the GUI run appear to be identical.
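One way to confirm that alignment quantitatively is to compare the two masks directly once both rasters have been read into NumPy arrays (e.g. with GDAL or pygeoprocessing). This is only a sketch; the function name and the NoData value are assumptions:

```python
import numpy as np

def holes_match_zero_flow(eff_ret, flow_dir, eff_nodata):
    """True if the NoData cells in effective retention are exactly
    the cells where flow direction equals 0."""
    holes = np.isclose(eff_ret, eff_nodata)
    zero_flow = flow_dir == 0
    return np.array_equal(holes, zero_flow)

# Synthetic example: the holes line up with the zero-flow cells.
eff = np.array([[-1.0, 0.5], [0.2, -1.0]])
flow = np.array([[0, 3], [5, 0]])
print(holes_match_zero_flow(eff, flow, -1.0))  # True
```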
In the GitHub link, I have added the updated code for the single model run, along with the pertinent output files from the Python code and the GUI (watershed results, effective retention, NDR, and flow direction). Below is the log from the GUI run. I hope this clears things up a bit and helps.
Thanks, @cobleer1 , that does help. If you’re getting different results from two model runs that you think should be identical, there are usually two possible explanations:
The input data and parameters are not actually identical.
The model versions are not the same (or there are other differences in the Python environment).
The second one is the easiest thing to check, so I would double-check that first. Your logfile from the Workbench shows InVEST version 3.18.0. You can check the InVEST version used by your script like so:
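The helper below is just one way to do it; `natcap.invest` also exposes `__version__` directly:

```python
import importlib.metadata

def invest_version():
    """Return the installed natcap.invest version, e.g. '3.18.0'.
    Raises PackageNotFoundError if the package is not installed."""
    return importlib.metadata.version("natcap.invest")

# Equivalent, from inside the package itself:
#   import natcap.invest
#   print(natcap.invest.__version__)
```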
If the versions match, then I would suggest continuing to simplify your script to get down to a minimal example that reproduces the problem. I know that may feel tedious, but it is often the most efficient way to find the problem. For example, you could remove the for loops and just execute a single model run with the same parameters you used in the Workbench.
In fact, you can export a Python script from the Workbench with all these parameters ready to go. If you run that script in the same environment where you run your sensitivity script and the results still don't match, then the problem is different software versions (either of natcap.invest itself or possibly of one of natcap.invest's dependencies).
Thank you so much. When running the script exported from InVEST, it had the same issues. I created a new conda environment and reinstalled the InVEST package, and it works now (I think, though I'm not sure, the cause was that some packages were installed with conda-forge and some with pip).
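For anyone hitting the same thing, a clean-environment recipe along these lines avoids mixing conda-forge and pip builds of the dependencies (the environment name and Python version here are arbitrary choices, not requirements):

```shell
# Create a fresh environment and install InVEST from a single source (pip).
conda create -n invest-clean -c conda-forge python=3.11
conda activate invest-clean
pip install natcap.invest==3.18.0
# Verify the version that scripts in this environment will see:
python -c "import natcap.invest; print(natcap.invest.__version__)"
```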
Thanks again, you have saved me a lot of time and headache.