
Thread: unsigned vs signed

  1. #1

    unsigned vs signed

    I know this is not specifically AS3 related, but it's close.

    In a lot of programming languages, and ESPECIALLY in ActionScript 3, there's a huge difference in processing time between unsigned and signed integers.

    Now, to me, it seems logical that an unsigned integer (uint) should be faster. It's a less dynamic type, since it can only hold positive numbers, and should logically reserve fewer memory resources.

    So why are signed integers (int) so much faster? I don't get it. The difference is enormous, at least on a micro scale.

    Any insight is welcome.
    Thanks

  2. #2
    I'm also interested in knowing what's up with AS3's uint slowness.

    It's a less dynamic type since it can only hold positive numbers and should logically reserve less memory resources.
    They're both 32-bit integers, so why would unsigned integers take up less space? The sign bit is just one of the 32 bits.

    What other languages have slow uints? (I haven't heard of similar speed issues in other languages, but I haven't been looking.) In a lower-level language like C, at least, uints and ints seem to be equal. I benchmarked this code against an identical copy that uses signed integers instead:

    Code:
    int main(){
    	unsigned int i = 0;
    	unsigned int j = 0;
    	unsigned int k = 0;
    	/* one billion additions: 100,000 x 10,000 iterations */
    	for(i = 0; i < 100000; i++){
    		for(j = 0; j < 10000; j++){
    			k = i + j;
    		}
    	}
    	return 0;
    }
    Quote Originally Posted by bash
    $ time ./speed_int

    real 0m3.689s
    user 0m3.674s
    sys 0m0.007s

    $ time ./speed_uint

    real 0m3.689s
    user 0m3.675s
    sys 0m0.007s
    Unless gcc is doing some optimization that I don't know about, it seems like these two programs took about the same amount of time even though two different integer types were used.

  3. #3
    Yeah, you're completely right, they are both 32-bit. My bad; I wasn't quite thinking when I wrote that.

    The question still remains, though.

  4. #4
    Well, I'd assume a check is performed to see whether the number provided is greater than or equal to 0 when using uint, which may result in slower speed.

  5. #5
    I believe the problem with uint comes in casting. There's a bit of casting going on behind the scenes when working with numbers, and I think uints take slightly more effort, for reasons beyond me.
