First, on my 64-bit 2.6GHz Intel Dunnington box (sage.math.washington.edu):
>>> import resource
>>> def cputime(s=0):
...     return sum(resource.getrusage(resource.RUSAGE_SELF)[:2]) - s
...
>>> t=cputime()
>>> a=3**(10**6)
>>> cputime()-t
0.53000000000000003
>>> b=a*(a+1)
>>> cputime()-t
2.46
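For anyone who wants to rerun this, here is a self-contained sketch of the same benchmark. It uses `time.process_time()` from the Python 3 standard library instead of the `resource`-based helper above (`resource` is Unix-only, while `process_time` reports user+system CPU time portably); the timings printed will of course depend on your machine.

```python
# Standalone version of the benchmark above, using the Python 3 stdlib.
# time.process_time() returns user + system CPU time of this process.
import time

t = time.process_time()
a = 3**(10**6)            # 3^1000000, an integer with ~477,000 decimal digits
t1 = time.process_time() - t
b = a*(a + 1)             # multiply two ~477,000-digit integers
t2 = time.process_time() - t
print(f"power: {t1:.3f}s  power + multiply: {t2:.3f}s")
```

As in the transcripts, the second time includes the first, so the multiplication alone took `t2 - t1` seconds.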
Next, on my laptop, a top-end MacBook Air (64-bit Intel Core 2 Duo running 64-bit Python 2.6.x):
>>> t=cputime()
>>> a=3**(10**6)
>>> cputime()-t
0.64609700000000003
>>> b=a*(a+1)
>>> cputime()-t
3.1051849999999996
And on the iPad:
>>> t=cputime()
>>> a=3**(10**6)
>>> cputime()-t
2.3500000000000014
>>> b=a*(a+1)
>>> cputime()-t
9.5899999999999963
Not bad!! Note that 32-bit versus 64-bit builds (and Python 2.5 versus 2.6) may matter here, depending on how Python's big-integer arithmetic is implemented.
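If you want to check which build you are running before comparing numbers, a quick way is to inspect `sys.maxsize`: it is 2**63 - 1 on a 64-bit Python and 2**31 - 1 on a 32-bit one. (On recent CPython builds the internal big-integer digits are typically 30 bits wide on 64-bit platforms versus 15 bits on 32-bit ones, which is one reason word size shows up in these timings.)

```python
# Report whether this interpreter is a 32-bit or 64-bit build.
import sys

bits = 64 if sys.maxsize > 2**32 else 32
print(f"{bits}-bit Python {sys.version.split()[0]}")
```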
As a bonus, I have an older iPhone 3G (not a 3GS), where we get:
>>> t=cputime()
>>> a=3**(10**6)
>>> cputime()-t
7.6699999999999999
>>> b=a*(a+1)
>>> cputime()-t
Wow: on this benchmark the iPhone 3G is roughly TEN TIMES slower than the iPad. No wonder the iPad feels snappier.