How to get the binary equivalent of a decimal number in Python

+1 vote
642 views

I need to get the 32-bit binary equivalent of a decimal number, and then flip it: change the 0's to 1's and the 1's to 0's.

For example, if the input is 2, the output should be:

the 32-bit equivalent of 2 is: 0000 0000 0000 0000 0000 0000 0000 0010
and the 1's complement is:     1111 1111 1111 1111 1111 1111 1111 1101

Is there any pre-defined function in Python to get the above results?

posted May 23, 2013 by anonymous


1 Answer

0 votes

I'm curious as to the intent of the assignment. Are you supposed to be learning about base conversion, about ones and twos complement, or about Python?

Assuming the intent is to learn about Python: the built-in function bin() takes a Python integer (which is not stored as decimal) and converts it to a str. At that point, you can manipulate the string any way you like.

x = 45
print(bin(x))   # prints: 0b101101

Perhaps you want to start by stripping off the leading '0b' using a slice. Then you want to pad it to 32 columns by prepending some number of zeroes. Then you want to insert some spaces at regular intervals.

Presumably doing the ones-complement operation on that string is then pretty easy for you.
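A minimal sketch of those steps (the helper names are my own, and the 4-bit grouping just matches the example in the question):

def to_grouped_32bit(n):
    bits = bin(n)[2:]          # strip the leading '0b'
    bits = bits.zfill(32)      # pad with leading zeroes to 32 columns
    # insert a space after every 4 bits
    return ' '.join(bits[i:i + 4] for i in range(0, 32, 4))

def ones_complement(bit_string):
    # flip every bit, leaving the spaces alone
    return ''.join({'0': '1', '1': '0'}.get(c, c) for c in bit_string)

n = 2
grouped = to_grouped_32bit(n)
print("the 32bit equivalent of %d : %s" % (n, grouped))
print("and the 1's complement is : %s" % ones_complement(grouped))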

answer May 23, 2013 by anonymous
Similar Questions
+1 vote

I have about 500 search queries, and about 52000 files in which I have to find all matches for each of the 500 queries.

How should I approach this? The straightforward way seems to be to loop through each of the files line by line, comparing every line against each of the 500 queries, but that looks like it would take too long.

Can someone give me a suggestion as to how to minimize the search time?
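One common way to avoid 500 separate passes is to combine the queries into a single compiled regex and scan each file only once. A minimal sketch, assuming plain substring queries; the example terms and file paths are hypothetical placeholders:

import re

queries = ["error 42", "timeout", "disk full"]   # the 500 search terms (hypothetical examples)
files = ["log1.txt", "log2.txt"]                 # the 52000 file paths (hypothetical)

# one alternation pattern, so every line is scanned a single time
pattern = re.compile("|".join(re.escape(q) for q in queries))

matches = {}  # query -> list of (filename, line number)
for name in files:
    with open(name, errors="ignore") as fh:
        for lineno, line in enumerate(fh, 1):
            for m in pattern.finditer(line):
                matches.setdefault(m.group(0), []).append((name, lineno))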

+2 votes

I am trying to measure some I/O execution time in milliseconds, but I am a bit confused about the best method to achieve that under Windows 7. I am using the following code, but I am not sure whether it is the best or correct way, since I get two different results. Which one should I take as the result, and which is the best way? Please advise.

Code execution results:
100000000 loops, best of 3: 0.00925 usec per loop
3.827947176762156

Run code:

import timeit, time
import datetime
import os

def main():
    # measure the cost of a single filesystem metadata call (size lookup)
    os.path.getsize("c:/video-2011-09-09-09-32-29.mp4")

if __name__ == '__main__':
    # tell timeit how to find main() in this module
    t = timeit.Timer('main()', setup='from __main__ import main')
    print(t.timeit(1))
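As a point of comparison (not part of the original post), a single call can also be timed directly with time.perf_counter(), which gives a millisecond figure for one I/O operation; the file path is just the one from the question:

import os
import time

start = time.perf_counter()
os.path.getsize("c:/video-2011-09-09-09-32-29.mp4")
elapsed_ms = (time.perf_counter() - start) * 1000.0
print("%.3f ms" % elapsed_ms)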
+5 votes

Using 1/3 as an example,

>>> 1./3
0.3333333333333333
>>> print "%.50f" % (1./3)
0.33333333333333331482961625624739099293947219848633
>>> print "%.50f" % (10./3)
3.33333333333333348136306995002087205648422241210938
>>> print "%.50f" % (100./3)
33.33333333333333570180911920033395290374755859375000

which seems to mean that the real (at least default) decimal precision is limited to "double", i.e. 16-digit precision (with rounding error). Is there a way to increase the real precision, preferably as the default?
For instance, UBasic uses a "words for fractionals" setting f, with a "Point(f)" system, where Point(f) sets the decimal display precision to .1^int(ln(65536^f)/ln(10)), with the last few digits usually garbage.
Using "90*(pi/180)*180/pi" as an example to highlight the rounding error (f = 4 is UBasic's default value):

 Point(2)=.1^09: 89.999999306
 Point(3)=.1^14: 89.9999999999944
 Point(4)=.1^19: 89.9999999999999998772
 Point(5)=.1^24: 89.999999999999999999999217
 Point(7)=.1^33: 89.999999999999999999999999999999823
 Point(10)=.1^48: 89.999999999999999999999999999999999999999999997686
 Point(11)=.1^52: 89.9999999999999999999999999999999999999999999999999632

If not in the core language, is there a higher-precision decimal module that can be added?
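For what it's worth, the standard-library decimal module does allow the working precision to be raised well past double precision; a minimal sketch (the choice of 50 digits is arbitrary):

from decimal import Decimal, getcontext

getcontext().prec = 50          # work with 50 significant digits
print(Decimal(1) / Decimal(3))  # 0.33333333333333333333333333333333333333333333333333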

...