- Data Deluge: The amount of digital data created in the world each year now stands at 7 zettabytes (1 zettabyte = 1 billion terabytes)
- Social Media: Facebook has touched 1 billion users, which would make it the third-largest country in the world by population
- Cloud: A tremendous amount of cloud infrastructure is being created
- Mobility: There are 4.7 billion mobile subscribers, covering about 65% of the world's population
Monday 19 November 2012
Transitioning to a New World – An Analytical Perspective
Tuesday 16 October 2012
Collaborative Data Management – Need of the hour!
Wednesday 12 September 2012
Hexaware sees strong order pipeline; 20% growth: Nishar
Atul Nishar, chairman, Hexaware, says the company remains quite positive about growing at 20% or more. He feels that if the situation improves after the US elections and there is no debacle in Europe, the environment can only improve.
He also says that there are currently five deals in the pipeline, one of which is at an advanced stage. The deals are spread across the United States and Europe, and across major verticals such as capital markets and travel as well as emerging verticals. In the last nine quarters the company has signed seven large deals.
Below is the edited transcript of his interview to CNBC-TV18.
Q: Hexaware recently had a deal win, and there have been reports and analyst notes suggesting that the company is in conversation with potential clients for four deals, one of which is at an advanced stage. Do you think something could fructify in the near term?
A: Currently there are five deals in the pipeline, and one is at an advanced stage. The deals are spread across the United States and Europe, and across major verticals such as capital markets and travel as well as emerging verticals. In the last nine quarters we have signed seven large deals.
Q: Are billings under pressure even as the deals come in? Are they coming from tight-fisted managements?
A: Over the last two years, we have marginally improved our average billing rates, both onsite and offshore. We don't see any pressure on pricing in the IT industry. We have repeatedly guided that our pricing should be assumed to be stable.
The important point is that clients want value, greater performance, result-oriented projects, and fixed-price contracts or greater commitment from offshoring companies. Clients do want to cut their costs and get more value, but they also know that if it all comes at the cost of the service provider, the arrangement will not be sustainable.
Q: How much do you think Nasscom's 13-14% growth target is under threat? Might it fall to half of that, or to high single digits?
A: Nasscom has guided for 11-14%, which is a wide enough range. Within the industry, some mid-sized companies as well as the scale players have done very well, so the picture is mixed. Where we have seen downsizing, it has mostly coincided with client-specific issues that may dent revenue for a period; that doesn't mean those companies will not be able to grow in future.
Q: Do you think Nasscom will hold the lower end of the range, at 11%?
A: That is the current expectation, and there is no reason to believe there has been any material change from the guided number.
Q: One concern around Hexaware for some time has been that, while margins have improved, they could come under pressure going forward because wage hikes in Q3 are expected to shave off margins to some extent. How do you respond to that?
A: In Q2 (we follow the calendar year), Hexaware reported a 22.9% EBITDA margin, which was higher than in Q1. We gave a normal 10% increment to all our offshore employees. The impact was absorbed in our margin, and in spite of that the margin improved.
We also absorbed the significant visa costs that traditionally come in that quarter. In the coming quarter there will be an onsite wage increase. For offshore employees the increment date is April 1 and for onsite employees it is July 1, and both remain unchanged. With this, we feel we can guide to stable margins.
We are proud that at Hexaware we have grown faster than the industry average while maintaining good margins. We don't believe in winning new deals by compromising on margins in any manner.
Q: So at this juncture you don't want to change your guidance of 20% dollar revenue growth in either direction, up or down?
A: We remain quite positive about growing at 20% or more. We feel that if the situation improves after the US elections and there is no debacle in Europe, the environment can only improve.
Wednesday 22 August 2012
Job: PeopleSoft Tester in Chennai
| Field | Detail |
| --- | --- |
| Title | PeopleSoft Tester |
| Grade | G4 |
| Skill | PeopleSoft, HRMS Testing, Payroll |
| Start Date | 21-08-2012 |
| Location | Chennai |
| Unit | 10 |
Job Information:
- 3-5 years of experience in ERP-related product testing.
- Knowledge of the complete testing life cycle and different testing methodologies.
- Minimum 2-3 years of hands-on experience with PeopleSoft HRMS.
- Minimum 1 year of experience writing test scripts for the PeopleSoft Payroll module.
- Good knowledge of HP Quality Center (QC).
- Strong analytical and troubleshooting skills.
Friday 10 August 2012
Short-term contracts give mid-cap IT cos new lease of life
With the duration of outsourcing deals getting shorter, deals worth nearly USD 85 billion are up for renegotiation this year, reports CNBC-TV18's Shreya Roy.
Over the last few years, uncertain times have forced IT companies to go in for more short-term contracts. For mid-cap IT companies, this may have been a blessing in disguise.
Data from outsourcing advisory firm TPI shows that around 700 contracts will come up for renegotiation this fiscal year, compared with 530 last year.
“There is a significant reduction in the tenure of contracts as they were originally signed. Compared to 10 years ago, when 500 of these were being done, there are 1000 a year. The tenure has gone down to five years instead of seven, so a lot of deals are naturally coming back to the market as renewals. In itself, this is a very large opportunity,” said Siddharth Pai, partner and MD at TPI India.
For many IT players, this may be just what the doctor ordered. After all, renewals account for almost 65% of the outsourcing market. Advisory firm Everest estimates that by October 2013, deals worth nearly USD 85 billion will be up for renewal.
These include a contract between HP and Bank of America, a mega deal from the Shell group that is currently with AT&T, HP and T-Systems, a Blue Cross Blue Shield deal with Dell, and Manulife's deal with IBM.
Many of these contracts are expected to be broken up into smaller chunks, as outsourcers look increasingly towards multi-sourcing. Analysts say this could work in favour of the smaller players, especially those like Mindtree and Hexaware, which have been focusing on developing niche capabilities to differentiate themselves from larger players.
Tuesday 7 August 2012
Hexaware Technologies: Riding High! - Nirmal Bang
Riding High!
Hexaware Technologies Limited (HTL) is a mid-sized IT company mainly catering to the capital markets (BFSI) and airline (transportation) sectors. It also focuses on enterprise software from PeopleSoft and Oracle. Recent large client wins have brought the focus back to this company, which has good expertise in these niche areas.
Investment Rationale
Improved revenue visibility due to large wins in the past five quarters
EBITDA margins have improved 812 basis points over the past five quarters, led by tight control of operating costs. In addition, the company has used its offshoring lever to its advantage, moving almost 14% of its work offshore during the same period. Currently the onsite:offshore mix stands at 53:47, utilization is in the early 70s, and plans to hire freshers should further aid margins going forward. We expect HTL to report EBITDA margins of 20%+ in CY12E and CY13E.
Proficiency in niche segments paying off
HTL earns 60% of its revenues from the capital markets and travel industries, and in terms of service lines almost 30% of revenues come from enterprise solutions. Within enterprise solutions, 60-65% of revenues are from PeopleSoft, where other software vendors' focus is limited.
Guidance revision to 20% USD revenue growth for CY12E
On the back of the good deals won recently, the company has revised its revenue guidance in USD terms to 20%. We feel this is a little conservative and the company can easily beat the guidance for CY12E.
Risks to our Rationale:
Concentration of revenues in discretionary spending
Hexaware derives more than 50% of its revenues from enterprise solutions and business intelligence & analytics, which could be affected in an economic downturn. However, the recent deal wins reaffirm the company's revenue visibility for CY12E.
Industry risks of wage pressures, rupee appreciation and competition
Rupee depreciation has acted in favour of the company and the industry per se. Any severe reversal of the rupee trend would affect the prospects of the firm.
Exposure to the European region
The company has 28.4% revenue exposure to the European region, and a few of the major deals have been signed with clients in this region. Given the current economic scenario prevailing in the Eurozone, any delay in the commencement of these deals, or their cancellation, may impact margins severely.
Valuation & Recommendation
We expect HTL's revenues to grow at a CAGR of 25% and adjusted profits at a CAGR of 21% over CY11-CY13E. Margin improvement will remain in focus, and we expect HTL's EBITDA margin to improve by 313 bps, from 18.03% in CY11 to 21.2% in CY13E. At CMP, the stock trades at 10.4x CY12E and 8.6x CY13E earnings. On the back of improved financials and good revenue visibility, we recommend a BUY on the stock, assigning a target multiple of 11x CY13E EPS for a price target of Rs 147, a potential upside of 28%.
Tuesday 15 March 2011
Configuring Informatica File Transfer Protocol
- Create an FTP connection object in the Workflow Manager and configure the connection attributes
- Configure the session to use the FTP connection object in the session properties.
- Specify the remote file name in the connection value of the session properties.
- Specify the source or target output directory in the session properties. If it is not specified, the Integration Service stages the file in the directory where the Integration Service runs on UNIX, or in the Windows system directory.
- Sessions cannot run concurrently if they use the same FTP source file or target file located on a mainframe.
- If a workflow containing a session that stages an FTP source or target from a mainframe is aborted, the same workflow cannot be run again until it times out.
- Configure an FTP connection to use SSH File Transfer Protocol (SFTP) when connecting to an SFTP server. SFTP enables file transfer over a secure data stream. The Integration Service creates an SSH2 transport layer that enables a secure connection and access to the files on an SFTP server.
- To run a session using an FTP connection to an SFTP server that requires public key authentication, the public key and private key files must be accessible on the nodes where the session will run.
| Attribute | Description |
| --- | --- |
| Remote Filename | The remote file name for the source or target. Enter the indirect source file name if an indirect source file is used. Use 7-bit ASCII characters for the file name; the session fails if it encounters a remote file name with Unicode characters. If a path is provided with the source file name, the Integration Service ignores the path entered in the Default Remote Directory field. The session fails if the file name and path are enclosed in single or double quotation marks. |
| Is Staged | Stages the source or target file on the Integration Service. Default is not staged. |
| Is Transfer Mode ASCII | Changes the transfer mode. When enabled, the Integration Service uses ASCII transfer mode; use ASCII mode when transferring files to or from Windows machines so that end-of-line characters are translated properly in text files. When disabled, the Integration Service uses binary transfer mode; use binary mode when transferring files on UNIX machines. Default is disabled. |
Tuesday 1 March 2011
Informatica – User Defined Functions
Tuesday 25 January 2011
Informatica Pushdown Optimization
What is Pushdown Optimization and things to consider
How does Pushdown Optimization (PO) work?
A few benefits of using PO
- No memory or disk space is needed to manage caches in the Informatica server for the Aggregator, Lookup, Sorter and Joiner transformations, as the transformation logic is pushed to the database.
- The SQL generated by the Integration Service can be viewed before running the session through the Pushdown Optimization Viewer, making it easier to debug.
- When inserting into targets, the Integration Service normally does row-by-row processing using bind variables (each execution needs only a soft parse, so there is processing time but no hard-parse time). With pushdown optimization, a single statement is executed once, as sketched below.
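As a rough illustration of that last point, here is a hedged sketch; the stg_orders and orders table names are hypothetical, and the actual SQL the Integration Service generates will differ:

```sql
-- Without pushdown: the Integration Service reads each row and binds it
-- into a prepared INSERT that is executed once per row (soft parse only).
INSERT INTO orders (order_id, total_amount) VALUES (:1, :2);

-- With full pushdown: one set-based statement does all the filtering and
-- aggregation inside the database in a single execution.
INSERT INTO orders (order_id, total_amount)
SELECT order_id, SUM(amount)
FROM stg_orders
WHERE status = 'OPEN'
GROUP BY order_id;
```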
Things to note when using PO
- Nulls treated as the highest or lowest value: While sorting data, the Integration Service can treat null values as the lowest, but the database may treat null values as the highest value in the sort order (Oracle, for example, does); see the example after this list.
- SYSDATE built-in variable: Built-in Variable SYSDATE in the Integration Service returns the current date and time for the node running the service process. However, in the database, the SYSDATE returns the current date and time for the machine hosting the database. If the time zone of the machine hosting the database is not the same as the time zone of the machine running the Integration Service process, the results can vary.
- Date Conversion: The Integration Service converts all dates before pushing transformations to the database, and if the format is not supported by the database, the session fails.
- Logging: When the Integration Service pushes transformation logic to the database, it cannot trace all the events that occur inside the database server. The statistics the Integration Service can trace depend on the type of pushdown optimization. When the Integration Service runs a session configured for full pushdown optimization and an error occurs, the database handles the errors. When the database handles errors, the Integration Service does not write reject rows to the reject file.
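To make the null-ordering caveat concrete, here is a minimal sketch in Oracle-style SQL; the table t and column col are hypothetical:

```sql
-- In ascending order, Oracle places NULLs last by default,
-- i.e. NULL sorts as the highest value:
SELECT col FROM t ORDER BY col;

-- To match the Integration Service's treatment of NULL as the
-- lowest value, make the null ordering explicit:
SELECT col FROM t ORDER BY col NULLS FIRST;
```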
Monday 3 January 2011
Informatica Performance Improvement Tips
- Use a single Source Qualifier to join source tables if they reside in the same schema, instead of a Joiner transformation.
- Make use of the Source Qualifier "Filter" property if the source type is relational.
- If subsequent sessions do a lookup on the same table, use a persistent cache in the first session. The data remains in the cache and is available for the subsequent sessions to use.
- Use flags as integers, as integer comparison is faster than string comparison.
- Use the table with the smaller number of records as the master table for joins.
- While reading from flat files, define the appropriate data types instead of reading everything as strings and converting.
- Connect only the ports that are required by subsequent transformations; check whether the remaining ports can be removed.
- Suppress the generated ORDER BY by placing a comment marker ("--") at the end of the SQL override in Lookup transformations (see the sketch after this list).
- Minimize the number of Update Strategy transformations.
- Group by simple columns in transformations like the Aggregator and Source Qualifier.
- Use a Router transformation in place of multiple Filter transformations.
- Turn off verbose logging while moving mappings to UAT/production environments.
- For large volumes of data, drop indexes before loading and recreate them after the load.
- For large volumes of records, use bulk load and increase the commit interval to a higher value.
- Use target-based commit ("Commit on Target") in the sessions.
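The ORDER BY suppression tip works because the Integration Service appends its own ORDER BY clause on the lookup ports to the lookup query at run time; ending the override with a comment marker turns that generated clause into a comment. A minimal sketch, assuming a hypothetical DEPT lookup table:

```sql
-- Lookup SQL override. At run time the Integration Service appends
-- something like "ORDER BY DEPT_ID, DEPT_NAME" to this query; the
-- explicit ORDER BY on a single indexed column plus the trailing
-- comment marker (--) makes the database ignore the appended clause.
SELECT DEPT_ID, DEPT_NAME
FROM DEPT
ORDER BY DEPT_ID --
```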
Thursday 23 December 2010
Leveraging Metadata in Informatica Workflow-Session/Analysis
Tuesday 26 October 2010
Impact Analysis on Source & Target Definition Changes
Changes to source and target definitions impact the current state of Informatica mappings, and this article lists the possible changes at the source and the target along with their impact.
Updating Source Definitions:
When we update a source definition, the Designer propagates the changes to all mappings using that source. Some changes to source definitions can invalidate mappings.
The table below describes how mappings are affected when a source definition is edited:
| Modification | Result after modifying the source definition |
| --- | --- |
| Add a column. | Mappings are not invalidated. |
| Change a column data type. | Mappings may be invalidated. If the column is connected to an input port that uses a data type incompatible with the new one, the mapping is invalidated. |
| Change a column name. | Mappings may be invalidated. If you change the column name for a column you just added, the mapping remains valid. If you change the column name for an existing column, the mapping is invalidated. |
| Delete a column. | Mappings can be invalidated if the mapping uses values from the deleted column. |
Adding a new column to an existing source definition:
- When we add a new column to a source in the Source Analyzer, all mappings using the source definition remain valid.
- However, when we add a new column and change some of its properties, the Designer invalidates mappings using the source definition.
- We can change the following properties of a newly added source column without invalidating a mapping:
1. Name
2. Data type
3. Format
4. Usage
5. Redefines
6. Occurs
7. Key type
Updating Target Definitions:
When we change a target definition, the Designer propagates the changes to any mapping using that target. Some changes to target definitions can invalidate mappings.
The following table describes how mappings are affected when we edit a target definition:
| Modification | Result after modifying the target definition |
| --- | --- |
| Add a column. | Mappings are not invalidated. |
| Change a column data type. | Mappings may be invalidated. If the column is connected to an input port that uses a data type incompatible with the new one (for example, Decimal to Date), the mapping is invalidated. |
| Change a column name. | Mappings may be invalidated. If you change the column name for a column you just added, the mapping remains valid. If you change the column name for an existing column, the mapping is invalidated. |
| Delete a column. | Mappings may be invalidated if the mapping uses values from the deleted column. |
| Change the target definition type. | Mappings are not invalidated. |
Adding a new column to an existing target definition:
- When we add a new column to a target in the Target Designer, all mappings using the target definition remain valid.
- However, when we add a new column and change some of its properties, the Designer invalidates mappings using the target definition.
- We can change the following properties of a newly added target column without invalidating a mapping:
1. Name
2. Data type
3. Format
If the changes invalidate a mapping, validate the mapping and any session using it. We can validate objects from the Query Results or View Dependencies window, or from the Repository Navigator, and we can validate multiple objects from these locations without opening them in the workspace. If we cannot validate the mapping or session from one of these locations, open the object in the workspace and edit it.
Re-importing a Relational Target Definition:
If a target table changes, such as when we change a column data type, we can edit the definition or we can re-import the target definition. When we re-import the target, we can either replace the existing target definition or rename the new target definition to avoid a naming conflict with the existing target definition.
To re-import a target definition:
- In the Target Designer, follow the same steps used to import the target definition, and select the target to import. The Designer notifies us that a target definition with that name already exists in the repository. If we have multiple tables to import and replace, select Apply to All Tables.
- Click Rename, Replace, Skip, or Compare.
- If we click Rename, enter a new name for the target definition and click OK.
- If we have a relational target definition and click Replace, specify whether to retain primary key-foreign key information and target descriptions:
| Option | Description |
| --- | --- |
| Apply to All Tables | Select this option to apply the rename, replace, or skip action to all tables in the folder. |
| Retain User-Defined PK-FK Relationships | Select this option to keep the primary key-foreign key relationships in the target definition being replaced. This option is disabled when the target definition is non-relational. |
| Retain User-Defined Descriptions | Select this option to retain the target description and the column and port descriptions of the target definition being replaced. |
Thursday 14 October 2010
Output Files in Informatica
The Integration Service process generates output files when we run workflows and sessions. By default, the Integration Service logs status and error messages to log event files.
Log event files are binary files that the Log Manager uses to display log events. When we run each session, the Integration Service also creates a reject file. Depending on transformation cache settings and target types, the Integration Service may create additional files as well.
The Integration Service creates the following output files:
Session Details/logs:
- When we run a session, the Integration Service creates a session log file with load statistics, table names, error information, threads created and so on, based on the tracing level set in the session properties.
- We can monitor session details in the session run properties while the session is running, or after it has failed or succeeded.
Workflow logs:
- The Integration Service process creates a workflow log for each workflow it runs; the workflow log is available in the Workflow Monitor.
- It writes information to the workflow log, such as:
- Initialization of processes,
- Workflow task run information,
- Errors encountered and
- Workflow run summary.
- The Integration Service can also be configured to suppress writing messages to the workflow log file.
- As with Integration Service logs and session logs, the Integration Service process enters a code number into the workflow log file message along with message text.
Performance details:
- The Integration Service process generates performance details for session runs.
- From the performance details file we can determine where session performance can be improved.
- Performance details provide transformation-by-transformation information on the flow of data through the session.
Reject files:
- By default, the Integration Service process creates a reject file for each target in the session. The reject file contains rows of data that the writer does not write to targets.
- The writer may reject a row in the following circumstances:
- It is flagged for reject by an Update Strategy or Custom transformation.
- It violates a database constraint, such as a primary key constraint.
- A field in the row was truncated or overflowed, and the target database is configured to reject truncated or overflowed data.
- To view the reject file settings, open the session, select any of the targets, and view the options:
- Reject file directory
- Reject file name
- If row error logging is enabled, the Integration Service process does not create a reject file.
Row error logs:
- When we configure a session, we can choose to log row errors in a central location.
- When a row error occurs, the Integration Service process logs error information that allows us to determine the cause and source of the error.
- The Integration Service process logs information such as source name, row ID, current row data, transformation, timestamp, error code, error message, repository name, folder name, session name, and mapping information.
- If we enable flat file logging, the Integration Service process saves the error log file, by default, in the directory entered for the service process variable $PMBadFileDir in the Workflow Manager.
Recovery tables:
- The Integration Service process creates recovery tables on the target database system when it runs a session enabled for recovery.
- When we run a session in recovery mode, the Integration Service process uses information in the recovery tables to complete the session.
- When the Integration Service process performs recovery, it restores the state of operations to recover the workflow from the point of interruption.
- The workflow state of operations includes information such as active service requests, completed and running statuses, workflow variable values, running workflows and sessions, and workflow schedules.
Control files:
- When we run a session that uses an external loader, the Integration Service process creates a control file and a target flat file.
- The control file contains information about the target flat file, such as the data format and loading instructions for the external loader.
- The control file has a .ctl extension. The Integration Service process creates the control file and the target flat file in the Integration Service variable directory, $PMTargetFileDir, by default. A sketch of a typical control file follows this list.
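For instance, when the external loader is Oracle SQL*Loader, the control file looks roughly like the sketch below. The EMP_TGT table and the field list are hypothetical, and this is not the exact file the Integration Service writes:

```sql
-- Hypothetical SQL*Loader control file, e.g. s_m_load_emp.ctl
LOAD DATA
INFILE 'emp_tgt.out'          -- the target flat file written by the session
INTO TABLE EMP_TGT
FIELDS TERMINATED BY ','
(EMPNO, ENAME, SAL)
```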
Post-session email:
- We can compose and send email messages by creating an Email task in the Workflow Designer or Task Developer; the Email task can be placed in a workflow or associated with a session.
- The Email task allows us to automatically communicate information about a workflow or session run to designated recipients.
- Email tasks in the workflow send email depending on the conditional links connected to the task. For post-session email, we can create two different messages: one to be sent if the session completes successfully, the other if it fails.
- We can also use variables to include information such as the session name, status, and total rows loaded.
Indicator files:
- If we use a flat file as a target, we can configure the Integration Service to create an indicator file for target row type information.
- For each target row, the indicator file contains a number indicating whether the row was marked for insert, update, delete, or reject (see the sample below).
- The Integration Service process names this file target_name.ind and stores it in the Integration Service variable directory, $PMTargetFileDir, by default.
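As a hedged sketch, an indicator file for a three-row target where two rows were inserted and one was updated would contain one row-type code per line; the conventional codes are 0 = insert, 1 = update, 2 = delete, 3 = reject:

```
0
0
1
```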
Target output files:
- If the session writes to a target file, the Integration Service process creates the target file based on the file target definition.
- By default, the Integration Service process names the target file based on the target definition name.
- If a mapping contains multiple instances of the same target, the Integration Service process names the target files based on the target instance name.
- The Integration Service process creates these files in the Integration Service variable directory, $PMTargetFileDir, by default.
Cache files:
- When the Integration Service process creates a memory cache, it also creates cache files. The Integration Service process creates cache files for the following mapping objects:
- Aggregator transformation
- Joiner transformation
- Rank transformation
- Lookup transformation
- Sorter transformation
- XML target
- By default, the DTM creates the index and data files for Aggregator, Rank, Joiner, and Lookup transformations and XML targets in the directory configured for the $PMCacheDir service process variable.