Changing Date Formats in Target Tables in Informatica

0 votes
578 views

In Informatica I receive dates from flat files in two formats, dd-mm-yyyy and dd/mm/yyyy. I need to convert all dates to one format, i.e. dd-mm-yyyy, using an expression and push them into the target so that no rows get rejected. How should I proceed?

posted Sep 16, 2014 by Sachin


1 Answer

0 votes

If you handle the data as a "date" data type, you should have no problems with rejections.

For handling a particular date format string, look at the properties at the Integration Service level; there you can control the default date format.
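If the conversion has to happen inside the mapping instead, a minimal sketch of an Expression transformation output port could look like the following (the port names in_date_str and o_date are made up for illustration, not from the original post):

-- o_date (date/time output port): parse whichever of the two formats matches
IIF(IS_DATE(in_date_str, 'DD-MM-YYYY'),
    TO_DATE(in_date_str, 'DD-MM-YYYY'),
    IIF(IS_DATE(in_date_str, 'DD/MM/YYYY'),
        TO_DATE(in_date_str, 'DD/MM/YYYY'),
        NULL))

If the target column is a string rather than a date, the same result can be wrapped in TO_CHAR(..., 'DD-MM-YYYY') to force the single dd-mm-yyyy format the question asks for.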

answered Sep 18, 2014 by Shweta Singh
Similar Questions
+2 votes

Suppose we have a Source Qualifier transformation that populates two target tables. How do we ensure TGT2 is loaded after TGT1?

+1 vote

In my source table the data is:

s_name,p_name,value
s1 ,   p1,     10
s1 ,   p2,     xyz
s1 ,   p3 ,    abc
s2 ,   p1 ,    20
s2 ,   p2 ,    xyz
s2 ,   p3 ,    abc

I want two target tables: the first is based on s_name s1, the second on s_name s2. Both tables contain p_name and value.

The target table data should look like this:

Table s1

p_name,value 
p1,    10
p2,    xyz
p3,    abc 

Table s2

p_name,value 
p1,    20
p2,    xyz
p3,    abc 
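One way to build this (a sketch only; the group names below are made up) is a Router transformation after the Source Qualifier, with one output group per s_name value and each group connected to its own target:

-- Router group S1_ROWS, connected to target table s1
LTRIM(RTRIM(s_name)) = 's1'
-- Router group S2_ROWS, connected to target table s2
LTRIM(RTRIM(s_name)) = 's2'

The LTRIM/RTRIM calls are only there because the sample data shows padding spaces around the values; drop them if the column is already trimmed.
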
0 votes

I have some files that I would like to consolidate into a single database table. The files have similar but different formats. The files look something like this:

FileOne:
•ColA : string
•ColB : string
•ColC : string

FileTwo:
•ColAA : string
•ColBB : string
•ColCC : string

FileThree:
•Col01 : string
•Col02 : string
•Col03 : string

The destination table looks like this:

TableDestination:
•ColFirst : string
•ColSecond : string
•ColThird : string

I want to develop a mapping that ETLs these three files into this one database, but because the column names are different, it looks like I'll have to develop three different mappings, or three different sources, or three different somethings. The problem is that my example is contrived: I actually have many different files that all have different formats and column names, but the data is all very similar.

I would like to develop a single mapping or workflow that can handle all of this by only adding a table that holds the column mappings. Such a table would look like this based on the sample files and sample table above:

TableMappings:
(image not shown: the TableMappings contents, mapping each file's column names to ColFirst, ColSecond and ColThird)

In this way, to edit a column mapping I only have to make an edit to this TableMappings table. I wouldn't have to make any changes at all to the mapping or workflow, nor would I have to redeploy an application.

What would a mapping or workflow look like that could take advantage of something like this? I assume there'd be a flat file source that takes files from a folder. There would be something in the middle that uses this TableMappings table to map column names. Finally there would be a relational data object that represents my destination database table "TableDestination". I don't know how to put this together though.

+2 votes

I have some mappings where business entities are populated after transformation logic. The row volumes are on the higher side, and there are quite a few business attributes which are defaulted to certain static values.

Therefore, in order to reduce the data pushed from the mapping, I created a "default" clause on the target table and stopped feeding those columns from the mapping itself. This works out just fine when I run the session in "Normal" mode: it gives me target table rows with some columns fed by the mapping and the rest taking values from the "default" clause in the table DDL.
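For illustration only (this DDL, including the table and column names, is made up and not from the post), the kind of default clause being described is something like:

-- hypothetical Oracle target: some columns fed by the mapping,
-- others left to their DEFAULT clauses
CREATE TABLE tgt_business_entity (
    entity_id     NUMBER         NOT NULL,
    entity_name   VARCHAR2(100),
    record_status VARCHAR2(1)    DEFAULT 'A',
    load_date     DATE           DEFAULT SYSDATE
);

In Normal mode the unmapped columns pick up 'A' and SYSDATE as described above.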

However, since we are dealing with the higher end of volumes, I want to run my session in bulk mode (there are no pre-existing indexes on the target tables).

As soon as I switch the session to bulk mode, this particular feature (default values) stops working. As a result, I get NULL values in the target columns instead of the defined "default" values.

I wonder:
1. Is this expected behavior?
2. If not, am I missing some configuration somewhere?
3. Should I raise a ticket with Oracle, or with Informatica?

My configuration:

Informatica 9.5.1 64-bit, with Oracle 11g R2 (11.2.0.3) running on Solaris (SunOS 5.10)

Looking forward to some help here.
