Control Fields vs Heap "Mock" Fields for Branching Trigger Behavior

 

I've seen solutions that talk about adding control fields to allow for clever trigger branching in problems like this. I know that works, but it annoys me (feels architecturally wrong) to create such fields if it can be avoided. I've also seen solutions that have to do with using static variables for a similar purpose, but I have found occasions where one global flag is not good enough because I need the flag per record. I've tried both out and I think I want something in the middle. Something like this:

global without sharing class HeapControl {
    // Per-record 'mock control fields': tags live on the heap for the duration
    // of the transaction instead of as real custom fields on the schema.
    private static Map<Id, Set<String>> recordTags = new Map<Id, Set<String>>();

    // Tag a record so later trigger invocations in the same transaction can branch on it.
    public static void AddRecordTag(Id rowId, String tag) {
        if (recordTags.containsKey(rowId)) {
            Set<String> tagsForThisRow = recordTags.get(rowId);
            if (!tagsForThisRow.contains(tag)) {
                tagsForThisRow.add(tag);
            }
        } else {
            recordTags.put(rowId, new Set<String>{ tag });
        }
    }

    // Check whether a record was tagged earlier in this transaction.
    public static Boolean IsRecordTagged(Id rowId, String tag) {
        if (recordTags.containsKey(rowId)) {
            Set<String> tagsForThisRow = recordTags.get(rowId);
            if (tagsForThisRow.contains(tag)) {
                return true;
            }
        }
        return false;
    }

    @IsTest
    public static void HeapDemo() {
        Account a = new Account(Name = 'Testing');
        insert a;
        HeapControl.AddRecordTag(a.Id, 'Control-Field-Name-1');
        a.Name = 'Testing2';
        update a;
        // A trigger that fired on Account update could check HeapControl for the record like this
        System.assert(HeapControl.IsRecordTagged(a.Id, 'Control-Field-Name-1'));
    }
}
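For clarity, here is a rough (not-yet-field-tested) sketch of how a trigger might consume this; the trigger name and the 'Control-Field-Name-1' tag are just placeholders:

trigger AccountHeapControlDemo on Account (before update) {
    for (Account acc : Trigger.new) {
        // Skip this branch of automation for records tagged earlier in the transaction
        if (HeapControl.IsRecordTagged(acc.Id, 'Control-Field-Name-1')) {
            continue;
        }
        // ... normal business-process logic for untagged records ...
    }
}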

I'd like a pro-con analysis of using real control fields vs mock control fields on the heap (it doesn't have to be my code; anything along the lines of a static Map<Id, ...> will do, and if you know of a field-tested version of this code, please share).

In my not-yet-field-tested opinion, the control field concept evolved out of a workflow world, but once you decide to drop down to triggers the heap will serve you better--at least you don't have to create control fields for every little trigger branch.

I apologize this isn't standard Q&A format--but this issue has been burning in my mind for months now and I think there are a lot of Salesforce developers (writing business process triggers in particular) that wrestle with this regularly. Thanks in advance for sharing your wisdom and experience!


Attribution to: twamley

Possible Suggestion/Solution #1

Good question, but what is the real question? The key phrase from your question for me is 'business process triggers'. If I may, I think the discussion needs to start from an application layering perspective, or, put more specifically, 'where do we put our business process logic?'. Before I get on to that, let me say this about triggers...

The thing about triggers is... they respond to data-centric actions, period. If you view the top-to-bottom architecture of your Force.com application, it's easy to forget you're still dealing with the database tier when writing code at the trigger level. If you look at the rich level of API support and the native UI around this database tier, you will soon realise you're effectively developing an app on a database schema that is by default open, like it or not. You have to service this way of interacting with your app. Take Salesforce Touch, for example: it provides your clients a rich mobile/tablet client for free! Then there's the new REST API; the list goes on. You only get the benefit of these if you embrace letting your customers/developers access this tier safely and clearly (so good database design is, ironically, even more important). So when you look at it from this angle, it's important to consider carefully any requirements that lead to design patterns, validation rules, etc. that might impact the use of your 'open database tier'. And don't get me wrong, there are some good reasons to, but you must design firstly for things to be open and discuss carefully the implications of over-complicating or closing off parts of your database schema interaction.

So that all sounds good, but why do I still find it hard sometimes to implement what I want? Well, if your application is typically 'data centric' in its purpose, you'll generally have an easier job. User interactions, much as in Salesforce CRM, are typically tasks surrounding the scope and lifetime of individual records. So triggers, layouts and record types really start to come into their own here. But if you have an application that is part 'data centric' (for standing/master data) and part 'process/task centric', where does that leave you? Enter SOA...

Service Orientated Architecture and Control Fields? Unless yours is solely a data-centric application, you're going to have to find a place for your clients (and by 'clients' I am not just talking about human consumers here) to consume and access your business process logic. Having them tweak Control Fields at your schema level may make sense, and there is scope for it if you can work it nicely into the schema. But overuse of it, along with making your clients dependent on a deeper understanding of your schema, will introduce voodoo into your schema and increase the barrier to entry for your application interactions, developer, end user or platform wise. So to avoid users poking around trying to reverse engineer your VF logic and control fields, why not consider exposing a service layer in your application for such logic?

So how do I do that then? REST has recently taken hold over Web Services, primarily because of the aforementioned 'barrier to entry'. It is easy to document and, if done correctly, is self-describing, much like the 'data centric' aspects of your application utilising Custom Objects. So for business processes, REST (I include JavaScript Remoting in this) gets my vote for exposing business process logic to your clients (be they human or machine). If you adhere strictly to this approach (which requires an SOA-based design process for your app), then you'll end up with a rich set of APIs for you and your clients to consume, which in this day and age of Cloud computing is pretty much a must! So while you might argue this is not strictly SOA (yes, Salesforce does handle a lot of the mechanics with OAuth etc.), I feel there is room for taking an SOA-based design and architecture approach to designing Force.com applications.
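For illustration only (a rough sketch; the class, method and URL mapping names are hypothetical), exposing a business process as an Apex REST resource keeps the process logic out of the trigger/database tier and gives clients a clearly named entry point:

@RestResource(urlMapping='/orderprocess/submit/*')
global with sharing class OrderProcessService {
    // Clients (VF, mobile, integrations) call this service rather than poking control fields
    @HttpPost
    global static String submitOrder(Id orderId) {
        // Orchestrate the business process here: validation, state changes, callouts, etc.
        return 'Submitted ' + orderId;
    }
}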

So OK, how would I implement the code in the question above? Developing a generic Apex solution is really a solution for a problem that I don't think should be solved generically or at the Apex level (see my thoughts below). Generally, too much of this approach is teaching your database-tier code about the business processes that should be implemented above it. You should let your business process tier do the driving and expose that as a service for your clients/callers to consume (in addition to those already provided by Salesforce, btw!). The trick then is marketing and supporting the strict design, development and consumption of those services internally (for your own client developers, be they VF, mobile or otherwise) and externally. Again, don't get me wrong, we should absolutely fill our triggers with great value-add logic, but it should know its place.

Control Fields : Heap vs Custom Field : That said, the Control Field approach is valid, as I describe above, when it 'fits' sensibly with your schema and thus with your API and native UI experience. When implementing it, I am in favour, in the first instance, of designing it into the application schema as a field. This way everyone knows what's going on. Again, the schema is open, and not declaring the control (by managing it internally) only leads to frustration and a higher barrier to entry due to the hidden complexity (internally and externally). I've also been burnt a few times by statics, initialisation and state carry-over; while I'm not 100% sure, I think, for example, that Batch Apex serialises statics over 'execute' invocations, which can catch you out at times.
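By way of comparison (again just a sketch; the Bypass_Automation__c checkbox is a hypothetical control field declared on the object), the 'real field' variant is visible to every consumer of the schema:

trigger OpportunityControlFieldDemo on Opportunity (before update) {
    for (Opportunity opp : Trigger.new) {
        // The control field is part of the open schema, so API and UI users can see and set it
        if (opp.Bypass_Automation__c == true) {
            continue; // the caller explicitly asked to skip this automation
        }
        // ... trigger branch that runs only when the control field is not set ...
    }
}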

Application Enterprise Patterns on Force.com

Here is a diagram I presented at a recent Dreamforce session about Enterprise Application patterns. The session's slides take a simple scenario and walk through what might be described as a 'consolidated' or 'merged' approach, typically seen when people start out or in smaller applications; this approach has its place as well. I followed this in my session with a more layered approach, using some Application Enterprise patterns along the way. I'm pleased to say it got quite a good reception and has since generated a steady stream of interest (I've got a draft blog post on it in the works, btw). You can find the GitHub repo here, with more info on the patterns. The slides are here.

[Diagram: layered Apex Enterprise Patterns architecture from the Dreamforce session]

NOTE: Trigger invocation is not shown here; the SObjectDomain class takes care of routing the CRUD events from the triggers to the 'Domain' classes representing the Standard or Custom Object behaviour your application is implementing. This pattern enforces development of appropriate logic for this context, leaving the Service layer to do any process-level orchestration.
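As a simplified illustration of that routing idea (this is not the repo's actual SObjectDomain API, just a hypothetical sketch of the layering): the trigger only routes the event, a Domain class owns the object-level behaviour, and a Service class handles process orchestration.

trigger AccountTrigger on Account (after update) {
    new AccountsDomain(Trigger.new).onAfterUpdate();
}

public class AccountsDomain {
    private List<Account> records;
    public AccountsDomain(List<Account> records) { this.records = records; }
    public void onAfterUpdate() {
        // Object-specific defaulting/validation lives here...
        // ...while process-level work is delegated to the Service layer.
        AccountsService.recalculateDiscounts(new Map<Id, Account>(records).keySet());
    }
}

public class AccountsService {
    public static void recalculateDiscounts(Set<Id> accountIds) {
        // Hypothetical process-level orchestration across objects
    }
}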


Attribution to: Andrew Fawcett

Possible Suggestion/Solution #2

Is it wise to use system methods when it comes to using static variables for recursion control?

http://www.salesforce.com/us/developer/docs/apexcode/Content/apex_methods_system_system.htm

Using the isBatch() and isFuture() context methods, I feel the number of static variables that need to be used can be minimized.

Instead of doing

public class abc {
    static Boolean static_variable = false;
    public static void doSomething() {
        if (static_variable == false) {
            // do something
            static_variable = true;
        }
    }
}

try:

public class abc {
    public static void doSomething() {
        if (System.isBatch() || System.isFuture()) {
            // skip or adjust the logic when running in a batch or future context
        }
    }
}

In case there are multiple batch jobs being executed, instead of setting multiple static variable references, a single check in the class determines whether a batch/future process is running.


Attribution to: Rao

Possible Suggestion/Solution #3

At the outset, the way I look at it, the first line of the question is a bit of an apples-to-oranges comparison, i.e. the use of Control Fields in a Workflow-Trigger distributed situation vs in a Trigger-only scenario.

Analysing this problem in parts may help answer each of the two facets, one at a time.

One - the interplay of Workflows and Triggers to implement Business Logic. It is up for debate how logic should be distributed if not all of it can be accomplished in a trigger. Should workflows be used to implement as much as is feasible, with only the absolute can't-dos living in a trigger? In my opinion, yes, unless the resulting complexity is insanely great. Using the declarative features provisioned by the platform allows you to take advantage of forthcoming platform features; e.g. cross-object workflows precluded the need for at least some of those early-days cross-object triggers.

If you need to straddle logic across workflows and triggers, the 'heap' is the database, and control fields end up becoming the 'static' heap variables.

This is the only situation in which, in my opinion, control fields may be advocated.

The other facet is the use cases for control fields. If the resulting complexity of a distributed operation is too great, or if all of it can only be implemented in a trigger, then static 'heap' variables make the most sense. I can't see the need for any entity-specific control variables, although there may be a limited case for user-level control fields, particularly where doing so may prove more efficient for streamlining data integration/migration processes.
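For what it's worth, a minimal sketch of that static 'heap' variable approach in a trigger-only scenario might track processed record Ids, so the flag is per record rather than one global boolean (the class and method names here are illustrative only):

public class TriggerGuard {
    // Ids already handled in this transaction; statics reset with each new transaction
    private static Set<Id> processedIds = new Set<Id>();

    // Returns true only the first time a record is seen in this transaction
    public static Boolean startProcessing(Id recordId) {
        return processedIds.add(recordId);
    }
}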

Immediate Implications

Just talking a little bit about your other question that led to this thread: you are looking at the use of a Control Field to distribute logic between a trigger and a workflow vs customising the input using Visualforce, so you can clearly demarcate the source of the change.

Going down a Visualforce route (which I have done for cases like OpportunityContactRoles, where triggers/workflows are not possible) means investing in a test class, besides the cost of maintainability and the danger of forgoing new features that become available in out-of-the-box page layouts; e.g. inline editing came to page layouts, and it was some time before that became available in Visualforce. Declarative should always be the preferred approach unless there are really strong motivations for going custom, as more often than not, custom ends up being more expensive in the long run (the cost of maintainability, test classes and deployment inclusive).


Attribution to: techtrekker
This content is remixed from stackoverflow or stackexchange. Please visit https://salesforce.stackexchange.com/questions/4409
