At some point, you have the numbers (a, b) with a > b. After the first step these turn into (b, c) with c = a mod b, and after the second step the two numbers will be (c, d) with d = b mod c.
Now think backwards. As d = b mod c, we know that b = kc + d for some integer k ≥ 1. The smallest possibility is k = 1, therefore b ≥ 1·c + d = c + d.
From that result and from a > b we get a > c + d. Adding the two inequalities b ≥ c + d and a > c + d gives a + b > 2(c + d).
In words, after each two consecutive steps the sum of the two numbers decreases to less than half of its original size.
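This halving of the sum can be checked empirically. A small sketch (the function name `gcd_steps` is mine, chosen for illustration):

```python
def gcd_steps(a, b):
    """Run Euclid's algorithm on (a, b) with a > b, recording the pair after each step."""
    pairs = [(a, b)]
    while b != 0:
        a, b = b, a % b
        pairs.append((a, b))
    return pairs

# Verify the claim: comparing each pair with the pair two steps later,
# the sum of the two numbers has dropped to less than half.
pairs = gcd_steps(1071, 462)
for i in range(len(pairs) - 2):
    (a, b), (c, d) = pairs[i], pairs[i + 2]
    assert a + b > 2 * (c + d)
```

For (1071, 462) the recorded pairs are (1071, 462), (462, 147), (147, 21), (21, 0), and the sums 1533, 609, 168, 21 indeed more than halve every two steps.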
Now look at the very first step of the algorithm. At the beginning, we have some (a, b) with a > b. After the very first step we have (b, c) with c = a mod b, and clearly b > c. Hence, after the very first step both numbers are at most equal to the smaller of the two input numbers.
Putting those two results together, we may conclude that the number of iterations (recursive calls in other implementations) is at most logarithmic in the smaller input number. In other words, the number of iterations is at most linear in the number of digits in the smaller input number.
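A quick empirical check of this bound (the helper name and the exact constant in the asserted bound are mine; the halving argument above gives roughly 2·log2 of the smaller number, plus a small additive term):

```python
from math import log2

def iteration_count(a, b):
    """Count the modulo operations Euclid's algorithm performs on (a, b)."""
    count = 0
    while b != 0:
        a, b = b, a % b
        count += 1
    return count

for a, b in [(10**6, 999), (10**18, 10**9 + 7), (914, 899)]:
    steps = iteration_count(a, b)
    # Each pair of consecutive steps more than halves the sum of the two
    # numbers, so the iteration count is O(log(min(a, b))).
    assert steps <= 2 * log2(min(a, b)) + 2
```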
To see that this analysis is asymptotically optimal, take (a, b) = (F(n+1), F(n)) -- that is, two consecutive Fibonacci numbers. Since F(n+1) mod F(n) = F(n-1), each step only decreases the index by one, so the algorithm needs on the order of n steps while the inputs grow exponentially in n.
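The Fibonacci worst case can be sketched as follows (with the convention F(1) = F(2) = 1; both helper names are mine):

```python
def fib(n):
    """Return F(n), with F(1) = F(2) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def iteration_count(a, b):
    """Count the modulo operations Euclid's algorithm performs on (a, b)."""
    count = 0
    while b != 0:
        a, b = b, a % b
        count += 1
    return count

# On (F(n+1), F(n)) the pairs walk down the whole Fibonacci sequence:
# (F(n), F(n-1)), (F(n-1), F(n-2)), ..., reaching (g, 0) after n - 1 steps.
n = 20
assert iteration_count(fib(n + 1), fib(n)) == n - 1
```

So the number of iterations grows linearly in n, while the inputs grow only exponentially in n, matching the logarithmic upper bound.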