I'm not much of a mathematician and most of the statistics talk is over my head, so take this for what it's worth. I'm just throwing a layman's idea out there to hopefully spark the real genius of others since the project has slowed down.
I've been looking at it like this:
Display  Actual   Disparity
  9       7.84%    -1.16
 23      21.86%    -1.14
 27      29.63%    +2.63
 36      34.16%    -1.84
 41      43.66%    +2.66
 49      51.70%    +2.70
 56      58.17%    +2.17
 63      70.27%    +7.27
 66      72.22%    +6.22
 75      82.55%    +7.55
 82      90.77%    +8.77
 94      99.15%    +5.15
In a 2RN system, what I'm calling the disparity should have roughly as many negative as positive values below and above ~50 HIT. But with a few exceptions, the high-confidence values are mostly positive, and the disparity increases with HIT. This makes me wonder whether a third value is being added. Of course, because 2RN diverges from 1RN more at the extremes, it couldn't be a static value.
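If it helps anyone check my numbers, here's a quick brute force of what a plain 2RN system predicts, assuming the usual convention (two uniform integers 0-99, the attack lands if the floor of their average is below HIT). It just enumerates all 10,000 RN pairs and prints the disparity for each display value in my table:

```python
# Exact true hit for a plain 2RN system: two uniform integers 0-99,
# the attack lands if the floor of their average is below HIT.
def true_hit_2rn(hit):
    count = sum(1 for a in range(100) for b in range(100)
                if (a + b) // 2 < hit)
    return 100.0 * count / 10000  # as a percentage

for display in [9, 23, 27, 36, 41, 49, 56, 63, 66, 75, 82, 94]:
    actual = true_hit_2rn(display)
    print(f"{display:3d}  {actual:6.2f}%  {actual - display:+6.2f}")
```

For reference, this convention gives 50.50% at a displayed 50 and only 0.03% at a displayed 1.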
Maybe something like: ((3A+B)/4) + 200/(HIT+50)
In a 2RN system, the +200/(HIT+50) term would bolster the lower values and become more negligible as HIT increases. The +50 added to HIT would prevent absurd values from occurring at HIT < 10.
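I'm not certain what A and B are supposed to stand for in that formula, but if they're meant to be the two random numbers (mixed 3:1 instead of averaged evenly) and the 200/(HIT+50) term is percentage points added on top of the resulting rate, the prediction can be brute-forced the same way. Treat this as a sketch of one possible reading, not the actual mechanics:

```python
# Hypothetical model: the roll is a 3:1 weighted mix of the two RNs,
# compared against HIT, with 200/(HIT+50) percentage points added on top.
# Reading A and B as the two 0-99 random numbers is an assumption.
def weighted_roll_rate(hit):
    count = sum(1 for a in range(100) for b in range(100)
                if (3 * a + b) / 4 < hit)
    return 100.0 * count / 10000  # as a percentage

def predicted_hit(hit):
    return min(100.0, weighted_roll_rate(hit) + 200 / (hit + 50))

for display in [9, 23, 27, 36, 41, 49, 56, 63, 66, 75, 82, 94]:
    print(f"{display:3d}  {predicted_hit(display):6.2f}%")
```

Comparing its output against the Actual column would show how far off this particular reading is.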
I don't know how to model this, so it's just something I think might represent the data. I'll try gathering outcomes at 1 HIT and see if that reveals anything; in a pure 2RN system it should be close to 0%, whereas with a +modifier it should return roughly the value of the modifier.
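For what it's worth, the arithmetic behind that test: a pure 2RN roll at a displayed 1 only lands when the two RNs sum to 0 or 1 (3 pairs out of 10,000), while the proposed modifier alone predicts about 4%, so the two hypotheses are far enough apart to distinguish with a reasonable sample size:

```python
# What each hypothesis predicts at a displayed HIT of 1.
# Pure 2RN: hit only if floor((a+b)/2) < 1, i.e. the RNs sum to 0 or 1.
pure_2rn = sum(1 for a in range(100) for b in range(100)
               if (a + b) // 2 < 1) / 100.0  # 3 pairs out of 10,000
modifier = 200 / (1 + 50)                    # the proposed +modifier term

print(f"pure 2RN at 1 HIT: {pure_2rn:.2f}%")   # 0.03%
print(f"modifier at 1 HIT: {modifier:.2f}%")   # 3.92%
```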