Thread: 32-bit versus 24-bit uints

1. 32-bit versus 24-bit uints

Hey everyone! Does anyone know offhand if there is a way to check whether an unsigned integer holds a 32-bit value versus a 24-bit one? This would be specifically in reference to a color value with alpha versus one without. Meaning, could I find that this...

0xFFFFFFFF

is different from this...

0xFFFFFF

Or that this...

0x00000000

is different from this...

0x000000

I realize that in an example like that, the end numerical value is the same, but there are times when it'd be nice to know if a value passed into a function originated as a 32-bit uint or a 24-bit one, even if the value of both were the same. Thanks!
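To make the problem concrete, here's a minimal sketch in ActionScript 3 showing that the two literals are indistinguishable at runtime: a uint is just a 32-bit number, so the "24-bit" literal and the zero-padded "32-bit" literal produce the exact same value.

Code:
```
// Both literals evaluate to the same uint at runtime,
// so no comparison or bit test can tell them apart.
var full:uint = 0x00FFFFFF; // "32-bit" literal, alpha byte explicitly 0
var rgb:uint = 0xFFFFFF;    // "24-bit" literal, no alpha byte written
trace(full == rgb); // true
```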

2. To elaborate, if you did this...

Code:
```
var color:uint = 0x22FFFFFF;
var theAlpha:uint = color >> 24 & 0xFF;
trace(theAlpha);
```
...it would trace 34 (0x22 in decimal). That's super - I know it was a 32-bit value that was defined. But if you compared these two values...

Code:
```
var color32:uint = 0x00FFFFFF;
var color24:uint = 0xFFFFFF;

var alpha32:uint = color32 >> 24 & 0xFF;
var alpha24:uint = color24 >> 24 & 0xFF;

trace(alpha32 + ", " + alpha24);
```
Both will come back as 0. The 32-bit value had its alpha explicitly defined as 0, whereas the 24-bit value didn't specify one at all, so the bits extracted are empty (which is also 0).

So this method works for all color values except when you specify 0 alpha in the color. At that point you can't distinguish between a 32-bit value and a 24-bit one. So, does anyone know of another method I might use?