What optimization techniques can be used when an Ab Initio job performs millions of updates and deletes on a MongoDB collection? I am currently using MongoDB bulk updates, but performance is still not up to the mark, and the Ab Initio job also runs multiple threads.
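For reference, a minimal sketch of what an unordered bulk write looks like in PyMongo (connection string, collection, and field names are hypothetical). Unordered bulk writes (`ordered=False`) let the server execute the operations without stopping at the first error and without serializing on order, which often helps throughput for large mixed update/delete workloads:

```python
from pymongo import MongoClient, UpdateOne, DeleteOne

# Hypothetical connection and collection names, for illustration only.
client = MongoClient("mongodb://localhost:27017")
coll = client["mydb"]["mycoll"]

# Build operations in batches; with ordered=False MongoDB does not
# have to apply them strictly in sequence, which usually improves
# throughput for bulk update/delete jobs.
ops = [
    UpdateOne({"_id": 1}, {"$set": {"status": "processed"}}),
    DeleteOne({"_id": 2}),
]
result = coll.bulk_write(ops, ordered=False)
print(result.modified_count, result.deleted_count)
```

When several Ab Initio threads write concurrently, keeping each thread's batch to a bounded size (commonly around 1,000 operations) and using unordered writes is a reasonable starting point.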
We have a replica set with 3 nodes, and we have noticed that when we perform a lot of deletes, the indexes grow considerably (about 1 GB). We do not understand why a delete would make an index grow. We want to understand how an index grows: does it reserve space, or does it grow continuously? Does anyone have information about index growth?
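One way to observe this behavior (a sketch; database and collection names are assumed): WiredTiger does not immediately return space freed by deletes to the operating system; the data and index files keep the allocated pages for reuse, so on-disk index size can stay high or keep growing after heavy deletes. You can compare `indexSizes` before and after a delete pass, and reclaim space with `compact`:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["mydb"]  # hypothetical database name

# collStats reports per-index on-disk size; comparing these numbers
# before and after a large delete shows the files do not shrink:
# WiredTiger keeps the freed pages for reuse rather than releasing them.
stats = db.command("collStats", "mycoll")
print(stats["indexSizes"])

# compact rewrites the collection and its indexes, releasing unused
# space back to the OS. On MongoDB 3.x it blocks operations on the
# database, so run it on a secondary or in a maintenance window.
db.command("compact", "mycoll")
```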
MongoDB 3.2.8, WiredTiger storage engine: a single collection has more than 500 GB. Does it need to be split up? What parameters should be adjusted?
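Before splitting the collection, the WiredTiger cache is usually the first parameter to examine; it is set at startup via `storage.wiredTiger.engineConfig.cacheSizeGB` in `mongod.conf`. A sketch of how to check whether the working set fits in the cache (connection details are assumed; the counters shown are standard WiredTiger statistics):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")

# serverStatus exposes WiredTiger cache counters. If the working set of
# a 500 GB collection does not fit, the cache stays near its maximum and
# reads keep going to disk; that is the signal to raise cacheSizeGB (or
# add RAM / shard) rather than anything about collection size itself.
cache = client.admin.command("serverStatus")["wiredTiger"]["cache"]
in_cache = cache["bytes currently in the cache"]
max_cache = cache["maximum bytes configured"]
print(f"cache usage: {in_cache / max_cache:.1%} of {max_cache} bytes")
```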
I have a DB with several (quite big) collections, on MongoDB 3.2.5 and WiredTiger with snappy compression enabled. I would like to change the compression setting on one small collection, to test it uncompressed. Is there a way to do that on a secondary without having to resync the whole DB?
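One approach that avoids a resync (a sketch; collection names are hypothetical): create a new collection with a per-collection WiredTiger `configString` that disables block compression, copy the documents over, and then switch reads to it. Note that `createCollection` and its storage options replicate, so the new collection is uncompressed on every member; restricting the change to a single secondary would instead require temporarily running that member as a standalone.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["mydb"]  # hypothetical names throughout

# Per-collection storage override: only this collection's data files
# are written uncompressed; the rest of the DB keeps snappy.
db.create_collection(
    "small_coll_uncompressed",
    storageEngine={"wiredTiger": {"configString": "block_compressor=none"}},
)

# Copy the documents across; for a small collection a single batched
# insert is sufficient.
docs = list(db["small_coll"].find())
if docs:
    db["small_coll_uncompressed"].insert_many(docs)
```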