Wednesday 31 October 2012

Session Log File Directory (Informatica Session Property Settings)

The Session Log File Directory under Informatica Session Property Settings gives the location for the session log file. By default, the Integration Service writes the log file in the service process variable directory, $PMSessionLogDir.
For example, in a session, you leave Session Log File Directory set to its default value, the $PMSessionLogDir server variable. For Session Log File Name, you enter the session parameter $PMSessionLogFile. In the parameter file, you set $PMSessionLogFile to “TestRun.txt”.
When you registered the Power Center Server, you defined $PMSessionLogDir as C:/Program Files/Informatica/PowerCenter Server/SessLogs. When the Power Center Server runs the session, it creates a session log named TestRun.txt in the C:/Program Files/Informatica/PowerCenter Server/SessLogs directory.
Note: If you enter a full directory and file name in the Session Log File Name field, leave the Session Log File Directory field empty.
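
The parameter file entry mentioned above could be written as follows (a minimal sketch; the folder, workflow, and session names are hypothetical):

```
[MyFolder.WF:wf_test.ST:s_test]
$PMSessionLogFile=TestRun.txt
```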

Session Log File Name (Informatica Session Property Settings)

In the ‘Session Log File Name’ option of Informatica Session Property Settings, you can enter a file name, a file name and directory, or use the $PMSessionLogFile session parameter.

You can use $PMSessionLogFile to change the session log name between sessions. In the General Options settings of the Properties tab, enter $PMSessionLogFile in the Session Log Filename field. Then define $PMSessionLogFile in the parameter file. When the Power Center Server runs the session, it creates a session log in the directory listed in the Session Log File Directory field and names the session log as instructed by the parameter file. If a session log with the same name already exists, the Power Center Server overwrites the existing file.

Write Backward Compatible Session Log File (Informatica Session Property Settings)

If the ‘Write Backward Compatible Session Log File’ option is checked under Informatica Session Property Settings, the Integration Service writes the workflow or session log to a text file (you can find the session log under the server directory: ...\infa_shared\SessLogs).

If the ‘Write Backward Compatible Session Log File’ option is not checked, the Integration Service creates only the binary log, since session log files are in binary format by default.

What Is the SAS Metadata Server?

SAS Metadata Server
The SAS Metadata Server is the most important component of the SAS Intelligence Platform; without it, the other client applications and SAS servers will not function.
The SAS Metadata Server supports three types of metadata repositories:

  • Foundation repository: a must; each metadata server has exactly one foundation repository
  • Project repository: a metadata repository for isolated working
  • Custom repository: an optional metadata store for a SAS Metadata Server

In simple words, the SAS Metadata Server is a centralized repository used to store metadata. It stores the metadata of all the metadata repositories and serves it across the entire SAS Intelligence Platform.

The SAS Metadata Server was introduced to maintain, in a centralized location, all the information about the underlying data files used by SAS applications. It provides a single integrated view of metadata, so that all users see consistent information. Using the SAS Metadata Server, clients can read metadata from and write metadata to one or more SAS Metadata Repositories.

Use of Go URL in OBIEE

The Go URL is a part of the saw URL and is used to:
  • add an answer to your favorites,
  • link an answer to another answer,
  • issue SQL,
  • pass filters,
  • integrate an answer into an external web site,
  • generate the result data as XML.
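
For instance, a Go URL that opens a saved answer might look like the following (a minimal sketch; the server name, port, and catalog path are hypothetical):

```
http://myserver:9704/analytics/saw.dll?Go&Path=/shared/Sales/Regional%20Report
```

Spaces in the catalog path must be URL-encoded, as the %20 above shows.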

Tuesday 30 October 2012

Dropping of a Constraint (Scenarios to Drop a Constraint)

The syntax for dropping a constraint is,

Alter table <table_name> drop constraint <constraint_name>;

Although constraints are used to validate the data entered into the database, in certain situations, it would be more efficient if the constraints were temporarily disabled until a particular function has been performed.  In yet another situation, it might be necessary to forgo validating the data entered into the database.  For such situations Oracle provides the various constraint states, which can be used in combination as well.  These states of the constraints can be set either during the table creation or the table can be altered later to modify them.

SCENARIO 1:
Suppose we decide to shift the data of a child table to another database.  We cannot do that directly, as the child table has links with the parent table in this database; hence data cannot be moved from the child table alone.  To facilitate this, we disable the foreign key for the duration of the shift and enable it again afterwards to re-establish the link.  The commands for disabling and enabling a constraint are,

Alter Table <Table_Name> Modify Constraint <Constraint_Name> Disable;
Alter Table <Table_Name> Modify Constraint <Constraint_Name> Enable;

See the following commands.

Select Constraint_Name, Constraint_Type, Status From User_Constraints Where Table_Name='EMP';

CONSTRAINT_NAME                C STATUS
------------------------------ - --------
PK_EMP                         P ENABLED
FK_DEPTNO                      R ENABLED

Alter Table Emp Modify Constraint FK_DEPTNO Disable;
Table altered.

Select Constraint_Name, Constraint_Type, Status From User_Constraints Where Table_Name='EMP';

CONSTRAINT_NAME                C STATUS
------------------------------ - --------
PK_EMP                         P ENABLED
FK_DEPTNO                      R DISABLED

Alter Table Emp Modify Constraint FK_DEPTNO Enable;
Table altered.

Select Constraint_Name, Constraint_Type, Status From User_Constraints Where Table_Name='EMP';

CONSTRAINT_NAME                C STATUS
------------------------------ - --------
PK_EMP                         P ENABLED
FK_DEPTNO                      R ENABLED


SCENARIO 2:
Now consider another situation, where a huge amount of data is being moved from one table to another.  The data present in the old table is valid and conforms to the primary key in one of its columns (self referential).  When the data enters the new table, it will again undergo validation as per the key on its column.  As the data is already validated, we can disable the constraint and, after the shift of the data, enable it once again, to avoid the unnecessary waste of time by way of rechecking overhead.
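
The steps described above can be sketched in SQL as follows (the table and constraint names are hypothetical):

```sql
-- Disable the constraint before the bulk move (data is already validated).
Alter Table Emp_New Modify Constraint Fk_Emp_Deptno Disable;

-- Move the pre-validated data without incurring the per-row checking overhead.
Insert Into Emp_New Select * From Emp_Old;

-- Re-enable the constraint once the move completes.
Alter Table Emp_New Modify Constraint Fk_Emp_Deptno Enable;
```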

You can read more about constraint states here: Types of Constraint States used in ETL

Monday 29 October 2012

Types of Constraint States used in ETL

Disable Constraint:  When the constraint is in the disable state, although the constraint continues to reside in the data dictionary, all data gets entered into the database without being checked by the constraint.

Enable Constraint:  When the constraint is in the enable state, data is not entered into the database unless it fulfills the condition of the constraint.  This is the default state of the constraint.

Validate Constraint:  When the constraint is in the validate state, the constraint ensures that the data entered conforms to the constraint specifications.

Novalidate Constraint:  When the constraint is in the novalidate state, data entered hereafter will be checked to conform to the constraint specifications.  This means that data entered earlier, which did not conform to the constraint might exist in the table.

Combination of the constraint states:
  • Enable, disable, validate and novalidate are independent properties of the constraints, and various combinations of the above can be created according to the requirement.
  • When the constraint is enable validate, it is the same as enable, and all data present in the table must conform to the constraint specifications.
  • When the constraint is enable novalidate, existing rows may violate the constraint, but any new rows inserted or modified must conform to the constraint specifications.
  • When the constraint is disable validate, it ensures that all the data in the table conforms to the constraint specifications.  [While enabling the constraint after a period of disable, the data entered during the disable period will be validated or checked.]
  • When the constraint is disable novalidate, it is the same as disabled, and the data entered is not ruled by the constraint specifications.  This state is useful when a large amount of already-validated data has to be loaded; time is saved, as the data need not be revalidated.
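
The combined states above are applied through ALTER TABLE; for example (the table and constraint names are hypothetical):

```sql
-- Enable Novalidate: new or modified rows are checked, existing rows are not.
Alter Table Emp_New Modify Constraint Fk_Emp_Deptno Enable Novalidate;

-- Disable Novalidate: the constraint is not enforced at all;
-- useful while loading a large amount of already-validated data.
Alter Table Emp_New Modify Constraint Fk_Emp_Deptno Disable Novalidate;
```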

See the following situation,
The max_level column of the itemfile table has the check constraint check(max_level<500).  Now we drop the check constraint using the drop command as follows.
Alter Table Itemfile Drop Constraint Max; 
Subsequently, we alter the table and apply the following check constraint.
Alter Table Itemfile Add Constraint Max1 Check(Max_Level<400);

Now Oracle will give a "check constraint violated" error.  This is because the table has data in that column which was entered under the earlier constraint.  If we apply the new constraint as follows, the validation will apply only to future entries made after the new constraint is added.  The command is,
Alter Table Itemfile Add Constraint Max1 Check(Max_Level<400) Novalidate;
The items already existing with max_level between 400 and 500 will remain as they are.  Only fresh data inserted will be validated to conform to the constraint specifications.
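
The resulting state of the constraint can be checked from the data dictionary; the STATUS and VALIDATED columns of USER_CONSTRAINTS record the current state:

```sql
Select Constraint_Name, Status, Validated
From User_Constraints
Where Table_Name='ITEMFILE';
```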

