Correct me if I'm wrong, but the bit depth (A/D conversion) determines how finely the CCD camera electronics can differentiate how full the well is, based on the equation 2^n − 1 (12-bit = 4,095; 14-bit = 16,383; 16-bit = 65,535).

If the full well capacity is 100,000 e-, then a 16-bit camera should have a gain of 1.53 e-/ADU. But practically, this level of precision isn't achievable if the dynamic range doesn't match it. The dynamic range is full well ÷ read noise, so if the read noise is 10 e-, the dynamic range of this hypothetical CCD camera is 10,000, or 13.3 stops (equivalent to 13.3 bits). The level of precision in the measurement is less than 1/6th of what its bit depth is capable of. IOW: a 16-bit camera with a dynamic range of 10,000 is no more precise than a 14-bit camera with the same dynamic range. However, a 12-bit camera would have less precision, because its bit depth would not be able to parcel out the output of the CCD camera as finely as the dynamic range would allow. I ask this because I was comparing the dynamic range of the ASI183MM (12-bit) with 16-bit CCD cameras.
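Here's a small Python sketch of the arithmetic above, using the hypothetical numbers from this example (100,000 e- full well, 10 e- read noise, 16-bit ADC); the variable names are just for illustration:

```python
import math

# Hypothetical CCD from the example above
full_well = 100_000   # full well capacity in electrons
read_noise = 10       # read noise in electrons
bit_depth = 16        # ADC bit depth

adu_levels = 2**bit_depth - 1           # 65,535 discrete output levels
gain = full_well / adu_levels           # e-/ADU to map the full well onto the ADC
dynamic_range = full_well / read_noise  # ratio of largest to smallest usable signal
stops = math.log2(dynamic_range)        # the same ratio expressed in stops (bits)

print(f"gain = {gain:.2f} e-/ADU")                            # 1.53
print(f"dynamic range = {dynamic_range:,.0f} ({stops:.1f} stops)")  # 10,000 (13.3 stops)
```

Since 13.3 stops needs only about 2^13.3 ≈ 10,000 distinguishable levels, the 65,535 levels of a 16-bit ADC oversample the signal by roughly 6.5×, which is where the "less than 1/6th" figure comes from.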

Source: https://www.photometrics.com/resources/learningzone/dynamicrange

Is there some other factor that makes getting a 16-bit CCD camera advantageous? Is this example wrong? Let me know.