Unit test SOQL limit setting up data model

I have run into a situation where the data model is very complex and, since Salesforce doesn't support mocks (e.g., Mockito), there are so many SOQL executions just in setting up the data model (triggers, queries to get formula field values, etc.) that the limit is hit. It is not uncommon for a traditional web site/application to have a few dozen tables, and it is likewise possible to build applications on Force.com that use a few dozen objects, so I don't see this as unreasonable or a bad design issue. Also, in production it would never be the case that the entire data model would be populated in one transaction.

I surround the code under test with Test.startTest()/stopTest() calls, which gives the code under test its own set of governor limits separate from the setup code, but I still hit the limit during setup.
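
For reference, the shape is roughly this (ComplexTestData and OrderService are hypothetical names standing in for your own setup code and code under test):

@isTest
static void testOrderProcessing() {
    // Everything before Test.startTest() counts against the test method's own limits,
    // including SOQL issued by triggers that fire during data setup.
    Account acct = ComplexTestData.createAccountHierarchy();

    Test.startTest();
    // The code under test runs here against a fresh set of governor limits.
    OrderService.process(acct.Id);
    Test.stopTest();

    // Assertions go here.
}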

What we've done is to create a TestUtil class that has a Boolean triggersActive flag that we can set to false, and an areTriggersActive() method that returns !Test.isRunningTest() || triggersActive. In each trigger we call that method and either exit early or continue, depending on the result. Then in the unit test methods we call TestUtil.setTriggersActive(false) on a case-by-case basis, where necessary and where we can get away with not firing the trigger(s). In some test methods we turn it on and off repeatedly.
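
A minimal sketch of that TestUtil class, following the description above:

public class TestUtil {
    // Defaults to true so triggers behave normally outside of tests.
    private static Boolean triggersActive = true;

    public static void setTriggersActive(Boolean active) {
        triggersActive = active;
    }

    // Always true outside of tests; inside tests it honors the flag.
    public static Boolean areTriggersActive() {
        return !Test.isRunningTest() || triggersActive;
    }
}

And the guard at the top of each trigger:

if (!TestUtil.areTriggersActive()) {
    return;
}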

I like the idea of adding something like Test.startDataSetup()/stopDataSetup() methods as an enhancement, but barring that, our triggersActive approach has "worked", though it is awkward to say the least.

I'm wondering if anyone else has encountered this and how they handle it?

Idea here: https://sites.secure.force.com/success/ideaView?id=08730000000gUkgAAE


Attribution to: Peter Knolle

Possible Suggestion/Solution #1

I second superfell's question regarding queries.

In terms of DML, one of the design patterns that you might consider, for more reasons than just establishing a test environment without hitting limits, is to cache your DML and perform your updates after all of your triggers have finished processing.

As your trigger codebase continues to grow - and especially if you install unmanaged packages from the AppExchange (or managed private packages that haven't passed the security review yet) - you will be sharing limits with all trigger code in a given execution context.

Currently, Apex governor limits permit up to 10,000 records (per execution context) to be processed as a result of DML operations, regardless of how many DML statements you break this up into. And you only get 150 DML statements in total, so this is a precious resource. A nice approach is to use a single List<SObject> containing records of various SObject types. For example, the following is possible:

List<SObject> so = new List<SObject>();

for(Lead l : leads)
   so.add(l);

for(Contact c : contacts)
   so.add(c);

for(Account a : accounts)
   so.add(a);

for(User u : users)
   so.add(u);

Database.update(so);

In this design, all of your trigger logic resides in classes; each trigger passes in a shared SObject map that the class code appends to. The trigger body might look like this:

Map<Id, SObject> objectsToUpdate = new Map<Id, SObject>();
TriggerClass1.processTrigger(Trigger.new, Trigger.newMap, Trigger.old, Trigger.oldMap, objectsToUpdate);
TriggerClass2.processTrigger(Trigger.new, Trigger.newMap, Trigger.old, Trigger.oldMap, objectsToUpdate);

if(objectsToUpdate.size() > 0)
   Database.update(objectsToUpdate.values());
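
A sketch of what one of those classes might look like, assuming a Contact trigger; the Account field it queues for update is purely illustrative:

public class TriggerClass1 {
    public static void processTrigger(List<Contact> newList, Map<Id, Contact> newMap,
            List<Contact> oldList, Map<Id, Contact> oldMap,
            Map<Id, SObject> objectsToUpdate) {
        for (Contact c : newList) {
            // Queue a parent Account update rather than issuing DML here.
            if (c.AccountId != null && !objectsToUpdate.containsKey(c.AccountId)) {
                objectsToUpdate.put(c.AccountId,
                    new Account(Id = c.AccountId, Description = 'Has Contacts'));
            }
        }
        // No DML in this class; the trigger performs a single update at the end.
    }
}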

Aggregating your DML should feel a bit less awkward than implementing a special test-method shutoff valve. But again, this is just for DML - not queries.


Attribution to: Adam

Possible Suggestion/Solution #2

I like Adam's proposed solution and I'd suggest looking there first.

That said, here is another alternative that has worked for me in the past; it came about due to the configuration of various customer orgs, i.e. they had rules in place that I needed to work with.

In your test case, when you are setting up data and you don't want certain triggers and/or validation rules to fire, run as a special user whose Profile and LastName indicate that test data is being set up:

    // Setting the Profile to one that contains 'Administrator' will bypass certain
    // validation rules that check CONTAINS($Profile.Name, "Administrator")
    Profile p = [SELECT Id FROM Profile WHERE Name='System Administrator'];

    // Setting the User's LastName to a known value that
    // can be checked later with UserInfo.getName()
    String label = System.Label.TestUser;

    String rand = String.valueOf(Math.rint(Math.random() * 10000));

    User u = new User(Alias='testUser', Email='test@example.com',
        EmailEncodingKey='UTF-8', LastName=label, LanguageLocaleKey='en_US',
        LocaleSidKey='en_US', ProfileId = p.Id,
        TimeZoneSidKey='America/Los_Angeles', UserName='testUser' + rand + '@example.com');

    System.runAs(u) {
        // Set up test data here where triggers and/or validation rules shouldn't fire.
    }

In validation rules that you don't want to fire during test data setup, check for:

CONTAINS($Profile.Name, "Administrator")
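
For example, the bypass is typically combined with the rule's existing error condition; the required-field check below is a hypothetical stand-in:

AND(
    NOT(CONTAINS($Profile.Name, "Administrator")),
    /* existing error condition, e.g. a hypothetical required-field check */
    ISBLANK(Some_Required_Field__c)
)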

At the start of triggers:

// The test user has no FirstName, so UserInfo.getName() returns just the LastName set above.
if(Test.isRunningTest() && UserInfo.getName() == Label.TestUser) {
    System.debug('UserInfo.getName: ' + UserInfo.getName() + ' Label.TestUser: ' + Label.TestUser + ' Trigger bypassed');
    return;
}

Attribution to: Daniel Ballinger
This content is remixed from stackoverflow or stackexchange. Please visit https://salesforce.stackexchange.com/questions/322
