Currently, the energy or capacity of a battery is commonly approximated by measuring its internal resistance and assuming it to be proportional to the load.
But this method can have a random error of up to 20%, which makes it unreliable. If we measure the internal resistance of a sample of identical batteries, we observe a dispersion that follows a Gaussian (bell-curve) distribution.
The same dispersion will appear if we use this method to measure their capacities. The method we are proposing, on the other hand, is based on a proven comparison: it introduces no error of its own beyond that of the measuring equipment, and is far more reliable.
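To see what an "up to 20% random error" does to resistance-based estimates, here is a minimal simulation. It is a hypothetical model (not a real instrument): the 20% spread is treated as roughly two standard deviations of a Gaussian around the true capacity, matching the bell-curve dispersion described above.

```python
import random
import statistics

def estimate_capacity_from_resistance(true_capacity_mah, rel_error=0.20):
    """Simulate one resistance-based capacity estimate.

    Hypothetical model: the random error is Gaussian, and the
    up-to-20% figure is taken as ~2 standard deviations.
    """
    sigma = true_capacity_mah * rel_error / 2
    return random.gauss(true_capacity_mah, sigma)

random.seed(42)  # reproducible sample
estimates = [estimate_capacity_from_resistance(2000) for _ in range(10_000)]
mean = statistics.mean(estimates)
spread = statistics.pstdev(estimates)
print(f"mean estimate: {mean:.0f} mAh, std dev: {spread:.0f} mAh")
```

Individual estimates routinely land 10–20% away from the true 2000 mAh, even though the mean over many measurements is close to it; this is the dispersion the proposed method aims to avoid.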
When the manufacturer gives us a discharge curve, it does not mean that all batteries of that model, even from the same batch, will follow it.
The reason is that no two batteries are identical: they vary within the accepted manufacturing tolerances, and the measuring equipment adds its own tolerance on top.
Many manufacturers derive the published curve from only three measurements, averaged over several cycles or across batteries close to the nominal standard.
So, in the worst case, all the tolerances add up, although such an extreme deviation is unlikely.
We can also build our own comparative reference by measuring one complete discharge ourselves and then checking the voltage response at the appropriate points in time.
That way, the only errors in our results are those of the measuring equipment.
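The comparative approach above can be sketched as follows. The voltage/charge pairs are invented illustrative values, not real cell data, and the function name is hypothetical: one full discharge is recorded as a reference curve, and a like battery's remaining capacity is then estimated by interpolating its voltage against that curve.

```python
# Reference discharge we measured ourselves on one battery:
# open-circuit voltage (V) versus charge already delivered (mAh).
# These numbers are illustrative only.
ref_voltage = [4.20, 4.05, 3.90, 3.75, 3.60, 3.40, 3.20, 3.00]
ref_delivered = [0, 300, 700, 1100, 1400, 1650, 1850, 2000]
FULL_CAPACITY = ref_delivered[-1]

def remaining_capacity(voltage):
    """Estimate remaining charge (mAh) of a like battery from its voltage,
    by linear interpolation along the reference curve (descending voltages).
    Sketch only: real cells also need load and temperature correction."""
    if voltage >= ref_voltage[0]:
        return FULL_CAPACITY
    if voltage <= ref_voltage[-1]:
        return 0
    # Find the first reference point at or below the measured voltage.
    i = next(k for k, v in enumerate(ref_voltage) if v <= voltage)
    v_hi, v_lo = ref_voltage[i - 1], ref_voltage[i]
    d_hi, d_lo = ref_delivered[i - 1], ref_delivered[i]
    # Fraction of the way through the bracketing segment.
    frac = (v_hi - voltage) / (v_hi - v_lo)
    delivered = d_hi + frac * (d_lo - d_hi)
    return FULL_CAPACITY - delivered

print(remaining_capacity(3.75))  # → 900.0 mAh with the sample curve above
```

The accuracy of the result then depends only on how well the reference discharge was measured, i.e. on the measuring equipment itself.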
Is this method more accurate and reliable?