Workflow rule causing trigger to fire twice

 

I need to be able to detect that the trigger execution is happening because of a workflow field update. There is a suggestion in the cookbook:

Controlling Recursive Triggers

But this solution is not properly bulkified. If my batch of records is split up (200, 100, 50, whatever the server decides), the second through nth batches are not processed correctly because the flag has already been set.
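For reference, the cookbook pattern in question boils down to a static boolean flag. A minimal paraphrase (from memory; the names are approximate, not the cookbook's exact code):

public class CheckRecursive {
    private static Boolean firstRun = true;

    public static Boolean runOnce() {
        if (firstRun) {
            // First trigger invocation in this transaction: allow processing
            firstRun = false;
            return true;
        }
        // Every later invocation is skipped, including the second chunk
        // of a bulk operation that the server split at 200 records
        return false;
    }
}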

The second part of the problem: this is part of a managed package. I have no control over the customers' workflow rules, and supporting and configuring a flag or last-changed field on a record is not realistic; our company doesn't have the support resources to walk each customer through the configuration.

Has anyone been able to solve this problem for large batches of records?

Edit (in response to Caleb's answer)

Separate trigger batches do not run in separate contexts

trigger AccountTriggerContext on Account (before insert) {
    System.debug('counter='+TriggerContextCounter.getCounter());
}


global without sharing class TriggerContextCounter {

    private static Integer counter = 0;

    public static Integer getCounter() {
        Integer value = counter;
        counter++;
        return value;
    }

    // Inserting 250 records forces the trigger to run in two chunks (200 + 50)
    private static testMethod void testTriggerContexts() {
        List<Account> accounts = new List<Account>();
        for(Integer i = 0; i < 250; i++) {
            accounts.add(new Account(Name='testTriggerContexts: ' + i));
        }
        insert accounts;
    }
}

Unit test debug messages

USER_DEBUG|[2]|DEBUG|counter=0

USER_DEBUG|[2]|DEBUG|counter=1

Both counters would have been 0 if the batches were running in separate contexts.

Edit (further explanation of the background, for @Ralph)

This is a generalized explanation, but when certain conditions are met, some date fields on other objects are updated. The trigger runs and updates the dates.

If a workflow rule causes a field update on the object, the same trigger runs again and the dates on the other objects are updated a second time. And because this is an increment operation, the dates on the other objects end up off by a factor of two.
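To make the failure mode concrete, here is a hypothetical sketch; Contract__c, Renewal_Date__c and Opportunity__c are invented names, not fields from the actual package:

trigger OpportunityDateTrigger on Opportunity (after update) {
    List<Contract__c> contracts = [
        SELECT Id, Renewal_Date__c
        FROM Contract__c
        WHERE Opportunity__c IN :Trigger.newMap.keySet()
    ];
    for (Contract__c c : contracts) {
        // Incremental update: if a workflow field update re-fires this trigger,
        // the date gets pushed out 60 days instead of 30
        if (c.Renewal_Date__c != null) {
            c.Renewal_Date__c = c.Renewal_Date__c.addDays(30);
        }
    }
    update contracts;
}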


Attribution to: Daniel Blackhall

Possible Suggestion/Solution #1

While I saw the solution described (store IDs in a map or set), I didn't see any code, so here is something you can cut and paste. Disclaimer: I have tested this against single-record updates and it works fine (it prevents my logic from running on the second trigger execution); I have not tested it against batch updates.

public class TriggerRunOnce {
    private static Set<Id> idSet = new Set<Id>();

    // has this Id been processed?
    public static Boolean isAlreadyDone(Id objectId) {
        return idSet.contains(objectId);
    }

    // record that this Id has been processed
    public static void setAlreadyDone(Id objectId) {
        idSet.add(objectId);
    }

    // empty the set if we need to for some reason
    public static void resetAlreadyDone() {
        idSet.clear();
    }
}

and in your trigger, loop over the records:

for (SObject obj : Trigger.new) {
    if (!TriggerRunOnce.isAlreadyDone(obj.Id)) {
        // do your processing
        TriggerRunOnce.setAlreadyDone(obj.Id);
    }
}

Attribution to: Rajat Paharia

Possible Suggestion/Solution #2

If you're only looking to ensure you process the same object once, e.g. create a child object when a record reaches a certain status, tracking the set of ids processed will be sufficient. However, if you're tracking a change in a field, which in theory could happen twice if the workflow has updated that field, you'll want to work around some unexpected behavior with trigger.old.

You might expect that after the first pass through the trigger, trigger.new would then become trigger.old, but trigger.old will actually contain the same values the trigger started with. Here's an example for tracking a stage change across multiple trigger executions.

/* 
Track how many times the stage has changed for an opportunity
*/
public class TriggerLogic {

  // static so it survives across trigger executions in the same transaction
  private static Map<Id, Opportunity> oldMapFix = new Map<Id, Opportunity>();

  private Map<Id, Opportunity> oldMap;
  private List<Opportunity> newList;

  public TriggerLogic(List<Opportunity> newList, Map<Id, Opportunity> oldMap) {
    this.newList = newList;
    this.oldMap = oldMap;
  }

  // NB: assuming we're called from a before update trigger
  public void countStatusChanges() {

    for(Opportunity newOppty : newList) {

      // prefer the copy stashed by an earlier execution over trigger.old,
      // which still holds the values from before the first execution
      Opportunity oldOppty = (oldMapFix.containsKey(newOppty.id))
        ? oldMapFix.get(newOppty.id)
        : oldMap.get(newOppty.id);

      if(newOppty.stageName != oldOppty.stageName) {
        newOppty.stage_change_count__c = (newOppty.stage_change_count__c == null)
          ? 1 : newOppty.stage_change_count__c + 1;

        // stash correct oppty with new stage so we don't
        // double count a stage change if we have multiple
        // trigger executions
        oldMapFix.put(newOppty.id, newOppty);
      }
    }
  }
}

This handles the following scenarios correctly. First:

  1. Opportunity stage changed
  2. Trigger fires, stage_change_count__c incremented
  3. Workflow fires and a field update (not to the stage) triggers a second execution
  4. Trigger correctly recognizes the stage has not changed again

And second:

  1. Opportunity stage changed
  2. Trigger fires, stage_change_count__c incremented
  3. Workflow fires and a field update changes the stage again
  4. Trigger correctly increments stage_change_count__c a second time

Attribution to: Ralph Callaway

Possible Suggestion/Solution #3

I have a trigger that creates new child records when a field's value is changed, and I ran into a similar situation where I ended up with two child records per field change when workflow was run. In the end I created a Map to record which records were processed (by Id, the key of the map) and the new value of the field when I created the child record. I then checked this map to make sure I wasn't re-processing the same field update.

It's a lot of extra complexity and code, and my use case is simpler than many. I'm not quite sure why Salesforce thinks running triggers twice in the same transaction when workflow is involved is a good thing.
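No code accompanied this answer; a minimal sketch of the described map-based guard might look like the following (the names are mine, not the original author's):

public class FieldChangeGuard {
    // record Id -> the field value for which a child record was already created
    private static Map<Id, String> processed = new Map<Id, String>();

    // true only the first time this record/value pair is seen, so a
    // workflow-driven second trigger execution won't create a duplicate child
    public static Boolean shouldProcess(Id recordId, String newValue) {
        if (processed.containsKey(recordId) && processed.get(recordId) == newValue) {
            return false;
        }
        processed.put(recordId, newValue);
        return true;
    }
}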


Attribution to: ca_peterson

Possible Suggestion/Solution #4

The example in the cookbook should work regardless of bulk size. Create a helper class; here's what I typically use:

global class SingleExecution {

    private static Boolean blnAlreadyDone = false;

    public static Boolean hasAlreadyDone() {
        return blnAlreadyDone;
    }

    public static void setAlreadyDone() {
        blnAlreadyDone = true;
    }

    public static void forceResetAlreadyDone() {
        blnAlreadyDone = false;
    }

    static testMethod void testSingleExecution() {
        //Hasn't already run
        System.assertEquals(false, SingleExecution.hasAlreadyDone());

        SingleExecution.setAlreadyDone();
        //Has just been run
        System.assertEquals(true, SingleExecution.hasAlreadyDone());

        SingleExecution.forceResetAlreadyDone();
        //Has just been reset
        System.assertEquals(false, SingleExecution.hasAlreadyDone());
    }
}

then in your trigger you can do

trigger AccountTrigger on Account (before update) {
    if(SingleExecution.hasAlreadyDone()) return;
    //Else
    SingleExecution.setAlreadyDone();
    //Do your stuff
}

Even though you may data load, say, 350 accounts, which would cause this trigger to fire twice (once for the first 200 records and again for the next 150), each time the trigger fires it has its own context, and so the SingleExecution flag is reset in each of those separate contexts.

You can also get fancier with your SingleExecution and allow it to be used across various triggers/classes, say for preventing @future from being called more than once, because instead of a single boolean it lets you pass a string with the name of your class. Because of context you typically don't need this; however, if you had two triggers that might call each other and you want them BOTH to fire once, then you'd need to differentiate between them somehow.

Hope this helps...

global without sharing class SingleExecution {

    private static Map<String,Boolean> singletonMap;

    global static Boolean hasAlreadyExecuted(String ClassNameOrExecutionName) {
        if(singletonMap != null) {
            Boolean alreadyExecuted = singletonMap.get(ClassNameOrExecutionName);
            if(alreadyExecuted != null) {
                return alreadyExecuted;
            }
        }
        //By default return false
        return false;
    }

    global static void setAlreadyExecuted(String ClassNameOrExecutionName) {
        if(singletonMap == null) {
            singletonMap = new Map<String,Boolean>();
        }
        singletonMap.put(ClassNameOrExecutionName,true);
    }

    static testMethod void testSingleExecution() {
        //Hasn't already run
        System.assertEquals(false, SingleExecution.hasAlreadyExecuted('testSingleExecution'));

        SingleExecution.setAlreadyExecuted('testSingleExecution');
        //Has just been run
        System.assertEquals(true, SingleExecution.hasAlreadyExecuted('testSingleExecution'));
    }
}
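A usage sketch for the keyed version (the key string is arbitrary; the trigger name works well):

trigger AccountTrigger on Account (before update) {
    if (!SingleExecution.hasAlreadyExecuted('AccountTrigger')) {
        SingleExecution.setAlreadyExecuted('AccountTrigger');
        // Do your stuff
    }
}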

Attribution to: caleb
This content is remixed from stackoverflow or stackexchange. Please visit https://salesforce.stackexchange.com/questions/109
