Access the base tables directly using Informatica, limiting the extract to only the rows and columns you need.
I'd recommend unloading these to flat files before loading them into the staging tables: it gives you a point of recovery if the staging-table load goes wrong, and it means you don't have to hit the Siebel DB again.
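To make the "limited extract to flat file" idea concrete, here's a minimal Python sketch of what that unload looks like outside Informatica. The DSN, table, columns, and filter are placeholders (I've used the Siebel S_ORG_EXT base table purely as an example):

```python
# Sketch of "extract only the rows/columns you need" to a flat file.
# The DSN, table, columns, and filter below are hypothetical placeholders.
import csv
import pyodbc

conn = pyodbc.connect("DSN=SIEBEL_DB")  # read-only connection to the Siebel DB
cursor = conn.cursor()

# Select only the columns you need, and filter rows at the source,
# so you never pull more out of the Siebel DB than necessary.
cursor.execute(
    "SELECT ROW_ID, NAME, LOC FROM SIEBEL.S_ORG_EXT WHERE LAST_UPD > ?",
    "2023-01-01",
)

with open("s_org_ext_extract.dat", "w", newline="") as f:
    writer = csv.writer(f, delimiter="|")
    writer.writerow(col[0] for col in cursor.description)  # header row
    for row in cursor:
        writer.writerow(row)
```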
From there you can either unload the staging tables, or just reuse your flat-file extract, to generate the delimited files with row counts.
I tend to favour modular processes with sensible recovery points over 'streaming' the data through for (arguably) faster execution time, so here's what I'd do (one mapping for each step):
1. Unload the base tables to flat files.
2. Join the flat-file entities as required and write new flat files in the staging-table format.
3. Load the staging tables.
4. Unload the staging tables (optional; skip it if you can get away with reusing the files created in Step 2).
5. Generate the .dat files in pipe-delimited format with the row count (a sketch of this step follows below).
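Step 5 is simple enough to sketch in a few lines of Python. The file names are hypothetical, and I've assumed the row count goes in a trailer record; adjust if your target system expects it in a header or a separate control file:

```python
# Sketch of Step 5: copy a staging extract into the final pipe-delimited
# .dat file and append the row count as a trailer record.
# File names are placeholders; the trailer convention is an assumption.

def write_dat_with_count(src_path: str, dest_path: str) -> int:
    row_count = 0
    with open(src_path, newline="") as src, open(dest_path, "w", newline="") as dest:
        for line in src:
            dest.write(line)                  # rows are already pipe-delimited
            row_count += 1
        dest.write(f"TRAILER|{row_count}\n")  # assumed trailer format
    return row_count

if __name__ == "__main__":
    count = write_dat_with_count("staging_extract.dat", "target_feed.dat")
    print(f"Wrote {count} data rows to target_feed.dat")
```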
If loading a staging table is only for audit purposes etc., and you can base Step 5 on the files you created in Step 2, then you could run Step 3 concurrently with Step 5, which may reduce the overall runtime.
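If you were driving the stages from a small script rather than an Informatica workflow, the "run 3 and 5 in parallel" idea would look roughly like this. The two worker functions are hypothetical stand-ins for launching the real sessions (e.g. via pmcmd):

```python
# Sketch of running Step 3 (staging load) and Step 5 (.dat generation)
# concurrently once Step 2's files exist. The two worker functions are
# hypothetical stand-ins for kicking off the real Informatica sessions.
from concurrent.futures import ThreadPoolExecutor

def load_staging_tables():
    ...  # e.g. invoke the Step 3 workflow via pmcmd

def generate_dat_files():
    ...  # e.g. invoke the Step 5 mapping against the Step 2 files

with ThreadPoolExecutor(max_workers=2) as pool:
    staging = pool.submit(load_staging_tables)
    dat_gen = pool.submit(generate_dat_files)
    # Wait for both; .result() re-raises any exception from a worker.
    staging.result()
    dat_gen.result()
```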
If this is a one-off process, or you need to write it in a hurry, you could skip the intermediate flat files and do it all in one or two mappings. I wouldn't do this, though, because
a) it's harder to test and
b) there are fewer recovery points.