High iowait on the mongod process indicates that MongoDB is spending time waiting on disk reads/writes. Are the resources consistently underutilized, or does this only happen for a period of time after a cold start?
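To put a number on the iowait, you can sample the aggregate `cpu` line of `/proc/stat` on Linux (the fifth counter is iowait) and compare two samples. A minimal illustrative sketch, not part of MongoDB's own tooling; the synthetic sample values below are made up for demonstration:

```python
def iowait_share(before, after):
    """Percentage of elapsed CPU time spent in iowait between two samples.

    Each sample is the list of counters from the aggregate 'cpu' line of
    /proc/stat: user, nice, system, idle, iowait, irq, softirq, ...
    """
    deltas = [b - a for a, b in zip(before, after)]
    total = sum(deltas)
    return 100.0 * deltas[4] / total if total else 0.0  # field 4 = iowait

def sample():
    """Read the aggregate CPU counters from /proc/stat (Linux only)."""
    with open("/proc/stat") as f:
        return [int(v) for v in f.readline().split()[1:]]

# Example with two synthetic /proc/stat samples taken one second apart:
before = [100, 0, 50, 800, 50, 0, 0]
after = [200, 0, 100, 1500, 150, 0, 0]
print("iowait: %.1f%%" % iowait_share(before, after))  # → iowait: 10.5%
```

In practice `iostat -x 1` gives you the same signal (the `%iowait` and per-device `%util` columns) without any scripting; a sustained high `%util` on the data volume while iowait climbs points at the disk rather than mongod itself.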
There are a number of potential reasons for your initial performance issues, but collecting more detailed metrics may point to the cause:
Does this happen after a cold start of the machine or mongod? E.g., does restarting mongod exhibit this issue as well?
Do you have multiple mongod processes running on the machine? E.g., on different ports, in Docker, or under any other virtualization?
Do you see anything suspicious in the server logs or the mongod logs? You could try mtools, a collection of scripts for analyzing MongoDB deployments via their log files, for example mloginfo --queries and mplotqueries.
Can you include the output of explain(true) for the query?
What is the storage engine used: MMAPv1 or WiredTiger?
What size and type of SSD is provisioned on the server?
How many times do you need to run the query before it reaches an acceptable level of performance, and how large is the improvement? Is there any difference in the log entries between the slow and fast runs?
What is the size of the collection?
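Regarding the explain(true) output: the first numbers to compare are nReturned, totalKeysExamined, and totalDocsExamined under executionStats. Zero keys examined means a collection scan, and a large docs-examined-to-returned ratio usually means a missing or unselective index. A small sketch of that triage (the field names are from MongoDB's executionStats output; the sample document and the 10x threshold here are illustrative):

```python
def triage_explain(explain_doc):
    """Flag common problems in an explain('executionStats') document."""
    stats = explain_doc["executionStats"]
    returned = stats["nReturned"]
    keys = stats["totalKeysExamined"]
    docs = stats["totalDocsExamined"]
    notes = []
    if keys == 0 and docs > 0:
        # No index keys were examined, so every document was scanned.
        notes.append("collection scan (no index used)")
    if returned and docs / returned > 10:
        # Arbitrary illustrative threshold: far more docs examined than returned.
        notes.append("examined %d docs to return %d: index not selective"
                     % (docs, returned))
    return notes

# Illustrative explain output for an unindexed query over 1M documents:
sample = {"executionStats": {"nReturned": 42,
                             "totalKeysExamined": 0,
                             "totalDocsExamined": 1000000}}
print(triage_explain(sample))
```

If the triage flags a collection scan, the fix is usually an index matching the query's filter and sort, which would also explain the warm-up behaviour: repeated scans gradually pull the collection into the cache.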
You may find the following links useful:
Optimizing Persistent Disk and Local SSD Performance
Optimization Strategies for MongoDB
Running MongoDB 3.0.10 on Ubuntu 14.04
I would also recommend upgrading to the latest 3.0 release, currently 3.0.12, for bug fixes and improvements.