IBM App Connect (Cast Iron) function to add/subtract minutes

Posted by jack on October 21, 2016

One common requirement in integration is to set threshold dates based on a configuration setting for an offset from an existing dateTime value – either the current dateTime or one read in from data. Unlike other languages and tools, neither App Connect / Cast Iron nor JavaScript has a convenient function to, for example, subtract 10 minutes from a given dateTime. I am sure many JavaScript developers have their own functions to do this, but App Connect only supports a sub-set of the JavaScript language (IBM is a bit coy about this!). On the other hand, it does provide date format functions we can take advantage of to reduce the number of code lines required.

Here then is my custom JavaScript function code to create such a function – this one works in minutes, but you could extend it to seconds or even milliseconds if you require…

I call this function addMinutesToDateTime_yyyyDDDHHmm

App Connect developers (and others) will recognise the suffix as a format string for dateTime entries. I chose that format because it converts to/from a string with a fixed length, and avoids handling months – DDD in App Connect means “the day number in the year expressed as 3 digits, with 001 being 1st Jan”. For example, 21 October 2016 09:30 becomes 20162950930 (2016 being a leap year, 21 October is day 295). NOTE – HH is the 24-hour clock; do not use hh, which is the 12-hour clock.

The inputs to the function are an integer (positive or negative) that I named minsToAdd_mayBeNegative, and a dateTime represented as a string called inputDateTime_yyyyDDDHHmm – as the name suggests, to convert an actual dateTime for input to the function, use a standard Format Date String function with the format entry… well, I think you know by now.

The output of the addMinutesToDateTime_yyyyDDDHHmm function is then passed to the Read Date String function – yes, with the same format string.

Here is the code within my custom function. Note I am not bothered about leap years, so if you use it over a period of more than 366 days you may possibly be a day out… if anyone wants to spend time elaborating it to handle leap years, feel free.

var returnValue;

// Parse the fixed-length yyyyDDDHHmm string into its numeric parts
var outputYear = Number(inputDateTime_yyyyDDDHHmm.substr(0, 4));
var outputDays = Number(inputDateTime_yyyyDDDHHmm.substr(4, 3));
var outputHrs = Number(inputDateTime_yyyyDDDHHmm.substr(7, 2));
var inputMins = Number(inputDateTime_yyyyDDDHHmm.substr(9, 2));

// Apply the offset, then normalise each part in turn, carrying as we go
var outputMins = inputMins + minsToAdd_mayBeNegative;
while (outputMins < 0) {
    outputMins = outputMins + 60;
    outputHrs = outputHrs - 1;
}
while (outputMins >= 60) {
    outputMins = outputMins - 60;
    outputHrs = outputHrs + 1;
}
while (outputHrs < 0) {
    outputHrs = outputHrs + 24;
    outputDays = outputDays - 1;
}
while (outputHrs >= 24) {
    outputHrs = outputHrs - 24;
    outputDays = outputDays + 1;
}
// Days run 1-366 in this format; leap years are deliberately ignored
while (outputDays <= 0) {
    outputDays = outputDays + 365;
    outputYear = outputYear - 1;
}
while (outputDays > 366) {
    outputDays = outputDays - 365;
    outputYear = outputYear + 1;
}

// Left-pad each part back to its fixed width
var outputDaysString = outputDays.toString();
while (outputDaysString.length < 3) {
    outputDaysString = '0' + outputDaysString;
}
var outputHrsString = outputHrs.toString();
while (outputHrsString.length < 2) {
    outputHrsString = '0' + outputHrsString;
}
var outputMinsString = outputMins.toString();
while (outputMinsString.length < 2) {
    outputMinsString = '0' + outputMinsString;
}

// Concatenating with the strings coerces outputYear to a string
returnValue = outputYear + outputDaysString + outputHrsString + outputMinsString;

return returnValue;
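For a quick illustration of the carry logic, here is the custom function treated as an ordinary JavaScript function (the date value is invented):

// Hypothetical call: 2016 day 295 (21 October), 00:05, minus 10 minutes.
// Minutes go negative and borrow an hour; hours go negative and borrow a day.
var result = addMinutesToDateTime_yyyyDDDHHmm(-10, '20162950005');
// result is '20162942355' – day 294 at 23:55

And should you want the leap-year handling I skipped, a minimal helper along these lines (my own sketch, not part of the function above) could replace the hard-coded 365:

// Gregorian rule: leap years are divisible by 4, except centuries not divisible by 400
function daysInYear(year) {
    var isLeap = (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
    return isLeap ? 366 : 365;
}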

Categories: Cast Iron

Ensuring numeric input to Salesforce

Posted by jack on September 3, 2016

A useful JavaScript snippet I used in a Cast Iron custom function to check that a string is a number string. This is useful where the input is user input in string format that might include invalid characters, and the Salesforce entry is either a numeric field type or has validation that causes non-integer input to be rejected. The function returns a zero if the input is not a valid integer string, to avoid errors caused by inputting a null into a number field in Salesforce – but in other circumstances it could return an empty string, as indicated in the code.

var returnValue;
// The + quantifier matches a whole string of digits – one or more
var regex = new RegExp('^[0-9]+$');
if (regex.test(inputString)) returnValue = inputString;
else returnValue = '0';  // alternatively: else returnValue = '';

return returnValue;
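Wrapped as a plain JavaScript function for illustration (the wrapper name is mine, not a Cast Iron artefact), the behaviour is:

function ensureNumericString(inputString) {
    var regex = new RegExp('^[0-9]+$');
    return regex.test(inputString) ? inputString : '0';
}
ensureNumericString('12345');  // returns '12345'
ensureNumericString('12a45');  // returns '0' – stray letter rejected
ensureNumericString('');       // returns '0' – empty input rejected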

Categories: Cast Iron,Salesforce

Dynamic XPath filtering in Cast Iron

Posted by jack on March 7, 2016

If you want to select a sub-set of records from an XML variable with recurring child records using a static filter, then use an XPath filter in a Map Variables activity:

  • Right-click on the root of the recurring node in the destination
  • Enter the XPath predicate, like variableName='literal'

So far so straightforward – but what if you want to filter on a value you calculated earlier in the orchestration rather than a fixed value? Well, then you can reference a variable in the XPath predicate as

variableName=bpws:getVariableData('myFilterVariable')

where myFilterVariable is the name of a primitive variable.
Categories: Cast Iron

Things to do with the Map Variables Activity

Posted by jack on July 12, 2015

The Map Variables activity superficially looks like the simplest of all Cast Iron activities – there are no end-points to connect or templates to select, just inputs and outputs. Yet it is one of the most versatile of Cast Iron activities. Here are a few things I use this activity for:

  • Filtering using XPath
  • Counting records in an XML variable
  • Accumulating rows in an output variable when they meet specified criteria
  • Converting a ragged hierarchy to a flat record (for example from a cross-object Salesforce query)
  • Merging values from one variable into matched values of another (payload) variable.

Filtering using XPath, like other Cast Iron features, is not immediately obvious in the Studio user interface – furthermore, there are important limitations in how it works.

To use XPath filtering, right-click on the root of the recurring element to filter into and select Filter Recurring Nodes from the pop-up menu. The XPath predicate is a statement like Status = 'Cast Iron to Convert' – only records meeting this criterion will be mapped. In this case, Status is a string value – XPath also works fine for Booleans, e.g. boolEntry = true(), and you can also use it for numbers, e.g. numberValue = 0.5 – however, take care if you have nulls when using XPath filtering, particularly with dates and numbers.

Cast Iron XPath filtering does NOT work with data as formatted out of a Salesforce Query – the structure of the output variable, with multiple levels including object, is not catered for in the Map Variables XPath filter, which basically only handles simple XML variables with a flat, single-level data structure. To filter data extracted from Salesforce you must therefore map the data to such a variable first – I create my XSD using XML Spy Pro, but you can use any tool that suits – just make sure that the resulting variable has one level and does not include the object level. Here is an example of such an .xsd:

<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:element name="users" type="user"/>
  <xsd:complexType name="user">
    <xsd:sequence>
      <xsd:element name="rows" type="row" minOccurs="0" maxOccurs="unbounded"/>
    </xsd:sequence>
  </xsd:complexType>
  <xsd:complexType name="row">
    <xsd:sequence>
      <xsd:element name="userName" type="xsd:string"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:schema>

You can filter from one input variable to multiple output variables in one Map Variables activity, using a different XPath expression for each – but if you do this, do not filter from a variable to itself; use entirely new variables on the output.

Accumulating rows in an output variable when they meet specified criteria – sometimes I have to go through this process:

  1. Pick up some records
  2. Check & classify each one
  3. Accumulate into groups of similar records for further processing

Accumulating records uses the Map Variables activity. Both recordToAdd and similarRecords are input variables in this case, with just similarRecords as the output. To add in recordToAdd, before mapping similarRecords, right-click on the root of the recurring element in the output and select Expand Occurrences – then map the input similarRecords to the many-records element of the output, and recordToAdd to the single additional occurrence.

You must ensure that  similarRecords is initialised when used in this way to accumulate records.
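In plain JavaScript terms – this is only a sketch of the semantics, not the Cast Iron API – each pass of the loop behaves like:

// Expand Occurrences in effect outputs the existing rows plus one extra
var similarRecords = [];                          // must be initialised before the loop
function accumulate(existingRows, recordToAdd) {
    return existingRows.concat([recordToAdd]);    // all current occurrences, plus the new one
}
similarRecords = accumulate(similarRecords, 'record A');  // hypothetical classified record
similarRecords = accumulate(similarRecords, 'record B');  // similarRecords now holds two rows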

Merging values from one variable into matched values of another (payload) variable – an approach I worked out to avoid having to write XSLT templates all the time. It is much quicker to code, easier for others to understand, and does not seem to impose much of a performance hit.

Here is a typical problem. I have updated versions of records coming into the data warehouse and I want to update the existing values in Salesforce to match, but have not set the record identifier as an external key in Salesforce. Of course, it would be much easier if I had set that key, but there are circumstances where that is not possible. So – what I want to do is

  1. Query the data warehouse to extract the up-to-date records
  2. Query Salesforce to get existing matched values
  3. Match the 2 sets together, putting the Salesforce Id into the matched up-to-date records so they can be used to update Salesforce
  4. Identify records not already in Salesforce, and insert those.

One approach is to use 2 nested For Each loops: for each up-to-date record (loop 1), find the matching Salesforce record (loop 2) – fine if you have 10 records (max 100 loops), not so good when you have 25,000 (max 625 million loops) – you might wait a long time…

My solution uses one loop, one custom JavaScript function and one Map Variables activity to do the same job. The variable with the values to merge in (in our example the Salesforce records) is looped, and the payload variable is amended to merge in the required values (in our example the Salesforce Ids).

The JavaScript custom function mergeMatchedValue has these input variables – payloadKey, payloadValue, matchKey, matchValue. The JavaScript code in the function is:

var returnValue;

// Pass the existing payload value through by default
returnValue = payloadValue;

// When the keys match, substitute the value from the matching record
if (payloadKey == matchKey) returnValue = matchValue;

return returnValue;
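Treated as plain JavaScript for a moment (the key and Id values below are invented for illustration):

function mergeMatchedValue(payloadKey, payloadValue, matchKey, matchValue) {
    var returnValue = payloadValue;
    if (payloadKey == matchKey) returnValue = matchValue;
    return returnValue;
}
// Keys match: the Salesforce Id replaces the empty payload value
mergeMatchedValue('REF-001', '', 'REF-001', '003D000000AbCdE');  // returns '003D000000AbCdE'
// Keys differ: the existing payload value passes through untouched
mergeMatchedValue('REF-002', '', 'REF-001', '003D000000AbCdE');  // returns ''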

The For Each loop variable salesforceRecord and the payload variable payload form the inputs to a Map Variables activity, and payload forms the output. Drag to link the root of payload, then un-map the mapping for the (Salesforce) Id value.

Now pull the mergeMatchedValue function into the activity, and link the business key and (Salesforce) Id from the payload variable to the function inputs payloadKey and payloadValue. Then map the business key and (Salesforce) Id from the loop variable salesforceRecord to the function inputs matchKey and matchValue. Map the output from the function to the output field (Salesforce) Id in payload.

Voilà! When run, the Salesforce Ids get merged into the payload variable. You can then use a Map Variables activity with XPath filtering downstream of the loop to filter payload into two new variables, payloadWithExistingSalesforceRecords and payloadNoExistingSalesforceRecords.

The equivalent XSLT template runs to about 80 lines of not-easy-to-understand code.

Categories: Cast Iron

Polling in the Cloud and reliability

Posted by jack on July 11, 2015

Salesforce is a very reliable SaaS application, but that is not the case for all cloud-based software – and even Salesforce is subject to occasional non-availability or time-out failures. So when designing integrations, allowance must be made for unexpected events and failures.

Salesforce applies API activity limits, so I want to avoid running queries too frequently, yet still enable users to get their records updated from the data warehouse via Cast Iron in close to real time. An example I just completed involved the user inputting a reference number that creates a stub record, and the integration looking in the data warehouse for a matching record and updating the stub record with the actual values. It is possible for the user to input an invalid value (in which case a matching record is never found) or to input a reference that has not got into the data warehouse yet due to processing delays elsewhere.

The obvious approach to processing the user input is to use polling – so that a change in Salesforce is picked up at a short polling interval. This is fine provided all the components (Salesforce, Cast Iron and the client’s data warehouse) are 100% available whenever users make the triggering change in Salesforce, and the matching record already exists in the data warehouse – but in practice that may not be the case, and we do have failures. One way to pick up the missed synchronisations is to use a scheduled query as a backup.

So I generally couple a polling-triggered orchestration with a twin “tickle” orchestration – all this does is look for records that remain in the state that should have triggered the processing, and update them without changing any business values, causing the polling to pick them up again – that way I avoid having to replicate the business logic in two orchestrations.

Of course, to make this work requires that the standard processing changes the processed record in a way we can detect, and also that there are some limits – generally I use 2 dateTime entries in Salesforce to limit the records to reprocess:

  1. A “most recently last tried” limit, to avoid retrying every unprocessed record on every schedule.
  2. An “oldest record to process” limit, to eliminate older records that have never processed correctly, so we do eventually give up on them.
Categories: Cast Iron,Salesforce

Cast Iron – returning SQL query results that match multiple key values

Posted by jack on July 7, 2015

This post concerns an annoying limitation of the Database Query activities when compared to the Salesforce equivalent, Query Objects – namely that the latter allows the input of a multi-row selection parameter using a SOQL IN clause, whilst the Database Query SQL activity does not. This is particularly obvious when (as in my case) your task is to take a bunch of Salesforce records and check them against the database for updates.

Other Cast Iron users refer to the Data Quality Lookup activity for a work-round, but I have a better approach using stored procedures. That solution is to have a stored procedure that accepts the input as a comma-separated list of values in a single string and internally converts this input to the key values to match in the query, and to use a database Call Procedure activity to collect the matching rows.

To create the input parameter string for the Call Procedure input, I used a For Each loop upstream, accumulating the key values from the Salesforce data (in my case I actually used a Split of that data to handle larger data volumes). Remember to remove the initial comma (or ending one, depending how you build) from the string after the For Each loop.
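The accumulating step itself can be a one-line custom function – this is my own sketch with invented input names, not a Cast Iron built-in:

// Inputs: csvSoFar (the string accumulated so far) and nextKey (this iteration's key value)
var returnValue = csvSoFar + ',' + nextKey;
return returnValue;

// After the loop, a final map can drop the leading comma: csvSoFar.substr(1)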

Derivation of a suitable stored procedure is documented at http://www.sommarskog.se/arrays-in-sql-2005.html#XML  – this is specific to Microsoft SQL Server 2005 (which is what I am using) – for 2000 there is a (slower, worse) alternative and for later versions more elegant and quicker options. No doubt there are similar approaches for Oracle & DB2.

Since I already had a view used in an earlier Cast Iron implementation where each individual Salesforce record was matched using a For Each loop, one record at a time (i.e. very slow), all I had to do was include that view in my stored procedure and return the matched rows (see this post on use of the Copy Of function to get the data from a stored procedure).

The net result of this is now 2 datasets for comparison:

  • The Salesforce Objects that require checking for update
  • The matching database values to use for that check and provide updates.

My next post will concern approaches to the comparison of two datasets in Cast Iron; as usual, I will avoid the use of more than simple JavaScript functions in the solution, making the most of built-in Cast Iron capabilities.
Categories: Cast Iron,SQL Server

Cast Iron – using Database Call Procedure activity to collect rows

Posted by jack on June 23, 2015

I was going to write a piece about this but found this excellent IBM DeveloperWorks guide, so won’t bother… just be aware you are unlikely to be able to work it out just by tinkering!

My comments

  • If you want to avoid a hard time with XPath filtering, map to a simple single-level variable downstream.
  • I use Altova XML Spy for all XSD design work – there are other brands, but using a simple text editor might not be the best use of your time.
Categories: Cast Iron

Accessing SOQL Aggregated Data in Cast Iron orchestration

Posted by jack on June 23, 2015

Salesforce SOQL aggregate queries allow you to summarise across objects very simply using syntax like:

SELECT COUNT(Id) contactCount, MAX(LastModifiedDate) lastModifiedDateContact, Account.BillingCity AccountCity
FROM Contact
GROUP BY Account.BillingCity
HAVING COUNT(Id) > 5

In Cast Iron Studio v7.0.0.2, using this query in a Salesforce Query Objects activity will return results as an AggregateResult-type object – but will not let this be mapped to anything else…

Investigating this issue, I discovered two blocks in this type of query:

  1. Using aliases in the SOQL query
  2. Using fields from multiple Salesforce objects

Removing the aliases is simple:

SELECT COUNT(Id), MAX(LastModifiedDate), Account.BillingCity
FROM Contact
GROUP BY Account.BillingCity
HAVING COUNT(Id) > 5

Now the output has aggregated data elements named expr0, expr1, expr2… and these will map – but Account.BillingCity will not. To handle this, I put a calculated Contact field Account_City__c = Account.BillingCity in the Salesforce org, then revised the query to:

SELECT COUNT(Id), MAX(LastModifiedDate), MAX(Account_City__c)
FROM Contact
GROUP BY AccountId
HAVING COUNT(Id) > 5

Now all the required elements from the aggregate query map fine… I can use a single-tier XSD with these entries (mapped values in brackets) to collect the output:

  • contactCount (expr0)
  • lastContactModDate (expr1)
  • accountCity (expr2)

As a general point, Cast Iron does not handle multi-level XML schemas well, so whenever possible I decompose to a flat structure, and then all the XPath filtering and other built-in features of Cast Iron work as expected.

Categories: Cast Iron

Salesforce.com – so much more than just a CRM

Posted by jack on April 26, 2013

One issue that deters organisations from adopting Salesforce is the higher cost of licences – if you do some simple arithmetic you quickly end up with a formidable number, especially when comparing Salesforce Enterprise with much of the competition. And the SaaS model means that after spending all that money you don’t even own a copy of the software!

There are a number of reasons why this kind of comparison can be a bit misleading – for example, the phrase “perpetual licence” may legally mean effective ownership, but in practical terms, unless you have support and maintenance in place (and the cost of this is normally 20–30% of the original box price per year), the value of what you purchased quickly decays. Even with s&m it is easy to get trapped on old versions of software due to the costs of upgrade – one client I worked for had a highly configured Pivotal installation that ran their business and could not upgrade it. They were running CRM without email integration!

Like much of IT, software packages of any type do not have as long a life as other assets – which is why accountants tend to write them off over many fewer years than they take for fixed plant, for example. With SaaS like Salesforce (other brands are available) you are always on the latest version for as long as you subscribe.

But the accounting cost of software is not the theme I want to address now; rather, how Salesforce can quickly become a key component well outside the core CRM use when integrated with other systems, and how that in turn can enrich the CRM. My example is a recent development for a client selling hotel rooms, but it can apply to many other businesses.

Hotel rooms can be booked using a wide variety of rate codes; some rates can include or exclude items like breakfast or late check-out, can be exclusive to specific consumers or travel agency groups, or can be special offers (effectively sale prices). The number and variety of rates is constantly changing as all those concerned seek new ways to grow the business.

As the use of these rates changes, so do the information reporting requirements for monitoring performance. With several thousand rates in use, rather than individually select every rate required in every different report, the client’s data warehouse has evolved a set of reporting flags and descriptions to allow rapid production and one-point maintenance of such reports.

But the data warehouse has no user front end, so each time a new rate is required, or an existing one is moved into or out of a selected group of rates for a specific requirement, the poor data warehouse manager must be asked to adjust the values in this reference data – and I have an aversion to being included in business processes and repeatedly doing simple, dull SQL updates. Other users were asking the report writer to manually include/exclude rates to get the reports they wanted quickly – neglecting the update of the reference data and causing different reports intended to be comparable to end up including different sets of rates.

In essence, a single table of reference data can be represented as a single object in Salesforce Enterprise – so I could rapidly design a new object based on the data fields in the SQL table, using entirely drag & drop customisation. The natural candidate for the Salesforce object Name is the rate code itself (as that is the one fixed value in the table), but Salesforce does not allow imposition of the Unique constraint on object Names, so I did have to apply a trigger to prevent duplicates. Here is the code for my trigger (Rate_Code_Description__c is the name of the custom object):

trigger rateCodeDuplicatePreventer on Rate_Code_Description__c (before insert, before update) {

    Map<String, Rate_Code_Description__c> rateCodeMap = new Map<String, Rate_Code_Description__c>();
    for (Rate_Code_Description__c rateCode : System.Trigger.new) {

        // Make sure we do not treat a rate code that is not changed during an update as a duplicate
        if ((rateCode.Name != null) && (System.Trigger.isInsert ||
            (rateCode.Name != System.Trigger.oldMap.get(rateCode.Id).Name))) {
            // Make sure another new rateCode in the same batch isn't also a duplicate
            if (rateCodeMap.containsKey(rateCode.Name)) {
                rateCode.Name.addError('Duplicate rate code in the same batch');
            } else {
                rateCodeMap.put(rateCode.Name, rateCode);
            }
        }
    }
    // Using a single query, find all the rate codes with the same name
    for (Rate_Code_Description__c rateCode : [SELECT Name FROM Rate_Code_Description__c WHERE Name IN :rateCodeMap.keySet()]) {
        Rate_Code_Description__c newRateCode = rateCodeMap.get(rateCode.Name);
        newRateCode.Name.addError('This rate code is already in Salesforce');
    }
}

Standard Salesforce drag & drop form design allowed me to group the various checkboxes into related sections to help users manage their own rates – in this client there is good data discipline and cross-functional cooperation, so I have not chosen to apply varying field-level security – but I could have done so with a little more time.

Integration between Salesforce and the data warehouse was undertaken using Cast Iron – the integration is 2-way but not symmetrical, since with the exception of the rate code (object Name) itself, we wanted all the other entries to be mastered in Salesforce, even if originating from the data warehouse table. And we need to ensure that users get confirmation that entries put into Salesforce have correctly updated the data warehouse table.

New rates can be input in the data warehouse – this happens automatically where a new rate code is in use in incoming reservation data but does not exist in the reference data table – in this event a stub record is created, using globally applicable rules to set a few flags to sensible defaults and taking a description from the first incoming reservation record. This new rate is then synched into Salesforce by IBM Cast Iron using a Poll Inserted Rows database activity. Once in Salesforce, a workflow rule sends an email to a responsible member of staff (identity varies according to the value of the rate code input) so that person can then correct the description and set the flags to the appropriate values.

The Salesforce object is connected to a second Cast Iron orchestration using a Poll Updated Objects activity – this then sends the updated values back to the data warehouse table. Finally, the response from the update activity is used to update a single date field on the Salesforce object to confirm to the user that all the integration has worked correctly and their input has updated the data warehouse table.

Total time taken to develop, test and implement this was less than two days, and the job of maintaining the rate-related data is put firmly where it should be – with the business users.

Now, here is the gravy – now that we have all this rate code information in Salesforce, I can leverage it to improve the CRM – providing users with information on customer, partner and supplier related activities based on both this rate code data and the other objects already in Salesforce. We’ve not done that part yet, so I will write about that another day.

Jack

Mapping Variables in Cast Iron

Posted by jack on February 3, 2013

The Map Variables activity in Cast Iron is used for many different purposes, including:

  • Mapping from an input schema to output schema. The most obvious use of the activity.
  • Counting records – using the count function to count the rows within an XML variable.
  • Filtering records using XPath.
  • Merging in data from one variable to another using a custom JavaScript function.
  • Accumulating data in a loop.
  • Including a CDATA element in an XML variable.

Here are some techniques I have used for these duties:

XPath filtering is a standard XML technology. To insert a filter expression in Cast Iron Studio, input the XML variable to be filtered on both sides of a Map Variables activity, and map the two instances together, input to output. Right-click on the occurrence label on the output (RH) side and select Filter Recurring Nodes from the pop-up menu.

Limitations on XPath filtering in Cast Iron (up to v6.1):

  1. XPath only works on the Map Variables activity. Other activities (for example the output to a Salesforce Upsert Objects activity) include the option to include a filter on the mapping, but in practice these do not work.
  2. XPath only works on variables with a flat structure. Again the interface allows you to input an XPath expression in the mapping for multi-level variables, but it will not work.
  3. The data structure on either side should be similar. You can map between different input and output variables and include XPath filters, but base them on the same XSD, otherwise you may find they don’t work correctly.

Merging in data using a JavaScript function. Cast Iron offers several options for merging data from one variable into another. One option is to use an XSLT (XML stylesheet) – and often this is the best approach, although it does involve tricky code, and Cast Iron Studio has limited diagnostic support to help you identify and correct coding errors.

I have a simpler approach using a simple JavaScript function, and this allows me to merge a single fixed value into a recordset. To merge two recordsets, the recordset to be merged into the target recordset is cycled through in a For Each activity, and then each member of the merge recordset is mapped in turn.

Here is a typical JavaScript function:

Name: mergeFieldXintoRset

Input parameters: RsetKey, RsetFieldX, MergeKey, MergeFieldX

javascript:

var returnValue = RsetFieldX;

// Substitute the merge value only when the keys match
if (RsetKey == MergeKey) returnValue = MergeFieldX;

return returnValue;

This function will transmit the existing value unchanged unless the keys match, when it substitutes the merge value for the existing data.

Accumulating data in a loop

To accumulate data in a recordset, the recordset variable is first mapped to itself. Then the variable(s) holding the values to be included in an additional record are added to the LHS of the Map Variables activity. Now click the RH variable on the row identifier (where the anti-clockwise arrow is displayed) and select Expand Occurrences from the pop-up menu. This displays the Expand Occurrences window with one additional occurrence as the option – this is normally what you want. Then map the added variable(s) on the LHS to the added occurrence.

Limitations on use: I have not found a way to add an undefined number of additional values in a single activity – so if you need to do this, cycle through the records in a For Each loop and add them one at a time.
Categories: Cast Iron