Write functions to read and write in a hash table in a multi-threaded environment.

+2 votes
722 views

Write functions to read and write in a hash table in a multi-threaded environment. The approach should give a decent optimization in terms of time complexity.

posted Dec 31, 2014 by Aditya


1 Answer

0 votes

This looks like a generic question, and it seems you are looking for nbds (non-blocking data structures).

The nbds page links to an excellent presentation on a real-world approach that allows growth and handles collisions (http://www.azulsystems.com/events/javaone_2008/2008_CodingNonBlock.pdf); a tiny sketch of the core compare-and-swap idea is below.

Please comment if looking for something different.
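A minimal sketch of the core idea behind that style of non-blocking design (illustrative only, not the actual nbds code; the table layout and names are my own): instead of taking any mutex, a writer claims a slot's key with a compare-and-swap and then publishes the value.

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define TABLE_SIZE 1024
#define EMPTY_KEY  0                      /* assumption: key 0 marks a free slot */

typedef struct {
    _Atomic uintptr_t key;
    _Atomic uintptr_t val;
} slot_t;

static slot_t table[TABLE_SIZE];          /* fixed size: no resizing in this sketch */

static size_t slot_index(uintptr_t key) { return (key * 2654435761u) % TABLE_SIZE; }

/* Lock-free put with linear probing: CAS the key to claim a slot, then store the value.
 * Caveat: a concurrent reader may briefly see the key with a zero value; real designs
 * such as the one in the linked presentation use sentinel states to cover that window. */
bool lockfree_put(uintptr_t key, uintptr_t val)
{
    size_t idx = slot_index(key);
    for (size_t probes = 0; probes < TABLE_SIZE; probes++) {
        uintptr_t expected = EMPTY_KEY;
        if (atomic_compare_exchange_strong(&table[idx].key, &expected, key) ||
            expected == key) {            /* claimed a free slot, or key already present */
            atomic_store(&table[idx].val, val);
            return true;
        }
        idx = (idx + 1) % TABLE_SIZE;     /* slot taken by another key: probe onward */
    }
    return false;                         /* table full */
}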

answer Dec 31, 2014 by Salil Agrawal
I have to design a hash table myself that overcomes the limitation of a whole-table lock. That part can be handled through bucket-level locks, but this also gets slow when the hash table is huge. I am looking for a solution for a multicore system.

Also, I could not understand the link you provided; I guess it is too hi-fi for me, or maybe it is just that I am a C programmer.
Check this one, which is simple: https://www.cs.cmu.edu/~fp/courses/15122-f10/lectures/22-mem/hashtable.c
However, it is not thread-safe, so you need to make it thread-safe, which can be done using the following steps (a rough sketch follows the steps):

Declare a mutex:
pthread_mutex_t htable_mutex;

Initialize it at the time of hash table initialization using
pthread_mutex_init(&htable_mutex, NULL);

In the free, insert, and search functions, wrap the critical section with
pthread_mutex_lock(&htable_mutex);
pthread_mutex_unlock(&htable_mutex);

And while destroying the hash table you can delete the mutex using
pthread_mutex_destroy(&htable_mutex);
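As a rough illustration of those four steps (the table layout, hash function, and names below are my own, not taken from the linked CMU code), a single global mutex can wrap a simple chained hash table like this:

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

#define NUM_BUCKETS 1024

typedef struct entry {
    char *key;
    void *value;
    struct entry *next;
} entry_t;

typedef struct {
    entry_t *buckets[NUM_BUCKETS];
    pthread_mutex_t lock;              /* the htable_mutex from the steps above */
} htable_t;

static unsigned hash(const char *key)
{
    unsigned h = 5381;
    while (*key) h = h * 33 + (unsigned char)*key++;
    return h % NUM_BUCKETS;
}

htable_t *ht_create(void)
{
    htable_t *ht = calloc(1, sizeof(*ht));
    pthread_mutex_init(&ht->lock, NULL);          /* initialize with the table */
    return ht;
}

void ht_insert(htable_t *ht, const char *key, void *value)
{
    pthread_mutex_lock(&ht->lock);                /* one writer/reader at a time */
    unsigned idx = hash(key);
    entry_t *e = malloc(sizeof(*e));
    e->key = strdup(key);
    e->value = value;
    e->next = ht->buckets[idx];                   /* prepend; no duplicate check, for brevity */
    ht->buckets[idx] = e;
    pthread_mutex_unlock(&ht->lock);
}

void *ht_search(htable_t *ht, const char *key)
{
    pthread_mutex_lock(&ht->lock);
    entry_t *e = ht->buckets[hash(key)];
    while (e && strcmp(e->key, key) != 0)
        e = e->next;
    void *value = e ? e->value : NULL;
    pthread_mutex_unlock(&ht->lock);
    return value;
}

void ht_destroy(htable_t *ht)
{
    /* freeing the entries is omitted for brevity */
    pthread_mutex_destroy(&ht->lock);             /* delete the mutex with the table */
    free(ht);
}

This matches the steps above, but it serializes every operation on the table, which is exactly the scalability problem discussed in the comments that follow.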

There is another example at
http://c.learncodethehardway.org/book/ex37.html
I am not looking for the code but for the approach.

This solution is too slow, as only one thread can operate on the whole table at a time. Taking a bucket-level lock is much faster in comparison. But when the hash table is extremely large, we will have to have either too many buckets (too many comparisons to check which bucket a key belongs to) or long buckets (which again slows things down).

A more efficient solution could be to keep a separate hash table for each bucket, with a master hash table used to look up the per-bucket hash table that finally gives the value for the key.
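A hypothetical sketch of that two-level / bucket-level-lock idea (all names and sizes below are illustrative): the master table only selects a shard from the key's hash, and each shard is an independent small hash table protected by its own mutex, so threads that hit different shards never contend.

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

#define NUM_SHARDS        64      /* master table: one lock per shard          */
#define BUCKETS_PER_SHARD 256     /* each shard is its own small chained table */

typedef struct entry {
    char *key;
    void *value;
    struct entry *next;
} entry_t;

typedef struct {
    pthread_mutex_t lock;                          /* bucket-level (shard-level) lock */
    entry_t *buckets[BUCKETS_PER_SHARD];
} shard_t;

typedef struct {
    shard_t shards[NUM_SHARDS];                    /* the "master" table */
} sharded_ht_t;

static unsigned hash(const char *key)
{
    unsigned h = 5381;
    while (*key) h = h * 33 + (unsigned char)*key++;
    return h;
}

void sht_init(sharded_ht_t *ht)
{
    memset(ht, 0, sizeof(*ht));
    for (int i = 0; i < NUM_SHARDS; i++)
        pthread_mutex_init(&ht->shards[i].lock, NULL);
}

void sht_insert(sharded_ht_t *ht, const char *key, void *value)
{
    unsigned h = hash(key);
    shard_t *s = &ht->shards[h % NUM_SHARDS];      /* master lookup picks the shard   */
    pthread_mutex_lock(&s->lock);                  /* lock only this shard            */
    unsigned idx = (h / NUM_SHARDS) % BUCKETS_PER_SHARD;
    entry_t *e = malloc(sizeof(*e));
    e->key = strdup(key);
    e->value = value;
    e->next = s->buckets[idx];                     /* prepend; no duplicate check */
    s->buckets[idx] = e;
    pthread_mutex_unlock(&s->lock);
}

void *sht_search(sharded_ht_t *ht, const char *key)
{
    unsigned h = hash(key);
    shard_t *s = &ht->shards[h % NUM_SHARDS];
    pthread_mutex_lock(&s->lock);
    entry_t *e = s->buckets[(h / NUM_SHARDS) % BUCKETS_PER_SHARD];
    while (e && strcmp(e->key, key) != 0)
        e = e->next;
    void *value = e ? e->value : NULL;
    pthread_mutex_unlock(&s->lock);
    return value;
}

On a multicore machine, increasing NUM_SHARDS usually reduces contention without changing the per-lookup cost, which tends to scale better than a single table-wide lock.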

Just want to know if there can be anything better than this.

Similar Questions
+1 vote

I am working on integrating multiple GUI (Tkinter) elements, such as a progress bar and label updates, and through my research I discovered that I need to use the threading module to distribute the calls for GUI updates and the processing of my main app.

My application is broken into multiple classes (modules), and I wanted to hear your thoughts on the best practices for implementing the threading model.

I was thinking of creating a new Python module, referencing all the other modules/classes from it, and creating threads via thread.start(): one would execute the GUI part, another would handle GUI updates, and another would do the processing.

Please share some of your thoughts on this approach, or suggest a more effective way.

...