SalesForce for Domino Dogs 3: Web Query Save Agents

“WebQuerySave”, “PostOpen”, and all their siblings have been a bastion of Notes and Domino development since time out of mind, and indeed they exist in near-identical form in Salesforce; they are just called Triggers.

Just like Notes/Domino has different events that let code ‘Do Stuff’ to records (e.g. “WebQueryOpen”, “OnLoad”, “WebQuerySave”, etc.), Salesforce has the same sort of thing. In Salesforce’s case they are broken down into two parts: Timings and Events.

Timings: Before and After

Before: The event has started but the record has not yet been saved. This maps roughly to the “Query” events in Domino.

If you want to calculate fields and change values in the record you are saving, this is the time to do it. You don’t have to tell it to save or commit the records as you normally would; a save runs automatically after your code finishes.
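For example, here is a minimal sketch of a before trigger (the Case object and the Priority default are made up purely for illustration):

trigger CaseDefaultPriority on Case (before insert, before update) {
    // In a “before” event you edit the records directly; no save or
    // update call is needed, the new values are written when the save runs.
    for (Case c : Trigger.new) {
        if (c.Priority == null) {
            c.Priority = 'Medium';
        }
    }
}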

After: The record has been saved and all field values have been calculated; then the After event runs.

If you want to update other objects on the basis of this record being created/saved, do it here. You can’t edit the record you are saving, but lots of useful bits such as the record Id and who saved it are available in the After event. {{You know that pain-in-the-butt thing you sometimes have to do in Domino, where you have to use the NoteID rather than the Document ID before a document is saved? This gets round that issue.}}
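For example, here is a sketch of an after insert trigger that uses the freshly minted record Id (the follow-up Task is made up purely for illustration):

trigger CaseFollowUpTask on Case (after insert) {
    List<Task> followUps = new List<Task>();
    for (Case c : Trigger.new) {
        // c.Id exists here because the record has been saved;
        // the Case records themselves are read-only in an “after” event.
        followUps.add(new Task(Subject = 'Follow up new case', WhatId = c.Id));
    }
    insert followUps;
}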

Events: Insert, Update, Delete and Undelete

These are exactly what they sound like: Insert is like a new document creation, Update is editing an existing document, and so on.

This then gives us the following set of event types:

  • before insert
  • before update
  • before delete
  • after insert
  • after update
  • after delete
  • after undelete{{Yes eagle eyes, there is no “before undelete”.}}

Now you can have a separate trigger for each of these events, but I have found that this bites you in the bum when they start to argue with each other, and it’s hard to keep things straight when they get complex. So I tend to have one trigger for all events, with a bit of logic in it to determine what happens when.

Here is the basic template I start with for all my triggers:

trigger XXXXTriggerAllEvents on XXXX (
    before insert,
    before update,
    before delete,
    after insert,
    after update,
    after delete,
    after undelete) {
    if (Trigger.isInsert || Trigger.isUpdate) {
        if (Trigger.isUpdate && Trigger.isAfter) {
            MyScriptLibrary.DoStuffAfterAnUpdate(Trigger.New, Trigger.OldMap);
        } else if (Trigger.isInsert) {
            // Do some stuff here for when a new document is created, like sending emails
        }
    }
}

As you can see, you can determine which event you are dealing with by testing “.isInsert” or “.isAfter”, and then run the right bit of code. Again, I like to keep everything in easy sight, so I use functions whenever I can, with nice, easy-to-understand names.
In the above case, I want to check a field after an update to see if it has been changed from empty to containing a value. You can do this with the very, very useful ‘Trigger.New’ and ‘Trigger.OldMap’, as you can see below:

public with sharing class MyScriptLibrary {
    public static void DoStuffAfterAnUpdate(List<XXXX> newXXXX, Map<ID, XXXX> oldXXXX) {
        for (XXXX currentXXXX : newXXXX) {
            // The field has a value now but was blank in the old version of the record
            if (!String.isBlank(currentXXXX.MyField) && String.isBlank(oldXXXX.get(currentXXXX.Id).MyField)) {
                System.debug('OMG!!! MyField changed, DO SOMETHING');
            }
        }
    }
}

So we are taking the list of objects{{You are best to handle all code in terms of batches rather than the single document you are used to in Domino. We will cover batching in a later blog post, but just take my word for it at the moment.}} that caused the trigger to run, i.e. “Trigger.New”, looping through them, and comparing them to the values in “Trigger.OldMap” (which contains the old values) to see if things have changed.


So that is the theory over. You can see existing triggers by entering Setup and searching for “apex triggers”.

BUT you can’t make them from there; you make them from the object you want them to act on.
Let’s take the Case object as an example.

In Setup, search for “case”, click on “Case Triggers”, and then on “New”.

That will give you the default trigger… let’s swap that out for the all-events trigger I showed above.

Better. Then just click save and your trigger will be live. Simples.
Now there is an alternative way to make triggers, and you do sometimes have to use it when you want to create a trigger for an object that does not live in Setup, such as the Attachment object.

You will first need to open up the Developer Console (select your name in the top right and choose “Developer Console”), then select File -> New -> Apex Trigger.

Select “Attachment” as the sObject and give it a sensible name.

And now you can make a trigger against an object that you don’t normally see.

Final Notes:

1. Salesforce process flows can fight with your triggers. If you get “A flow trigger failed to execute” all of a sudden, go and see if your power users have been playing with the process flows.
2. Make sure you have security set correctly, particularly with community users; both security profiles and sharing settings can screw with your triggers if you can’t write or see fields.
3. As always, make sure your code works if there are millions of records in Salesforce. CODE TO CATER TO LIMITS.

Presenting at MWLUG

Hooray!!! I have been accepted to speak at MWLUG this year.

I will be presenting 2 sessions:

1) “The SSL Problem And How To Deploy SHA2 Certificates” with Gabriella Davis

This session went down well at Connect, and we are hoping that Austin will love this changed and updated version. Gab is awesome to present with.

2) “Salesforce for Domino Dogs”

Now, if you saw this at Engage, I urge you to come again, as this is an evolving presentation that changes dramatically with each iteration (depending on the presenters and the ever-changing world of Salesforce):

  • Version 1: Balanced Architect (Engage 2016)
  • Version 2: Happy Evangelist (DNUG 2016)
  • Version 3: Rabid Developer <– This is the one I will be presenting

It will be my first trip out there, and besides presenting I will be manning the stand (the rest of the team are insisting I wear a shirt and everything).

P.S.

I’m looking for someone to room-share/split costs with (I sleep on the floor, so there never seems to be a point in getting a room to myself)… I can provide references…

New Platform Type, New Client Type

I have been doing a lot of cloud dev in recent months. Not Internet-facing work (I’ve been doing that for over a decade), but proper work on various cloud platforms (4+ of them), and they have turned out to require a shifting of mental gears: not from a technical aspect, nor from a platform or paradigm shift (SaaS), but from the point of view of dealing with a different type of client.

Now that seems odd. Sure, your cloud clients are nearly all the business rather than IT, but lots of my work is direct with the business, and it is often a relief, as you can deliver a product that best matches the exact needs of the people who use it.

So why?

After a lot of head scratching and reviewing of the projects, I have come up with the following reasons:

  1. Sales before IT: With cloud-based projects the sales team have very often just finished with the customer, so the customer arrives expecting that the platform is a PERFECT fit for everything they might want, that it might just need a tiny update to match their needs, and that the update will only take an hour or 2… but as is always true, the devil is in the details. So when we look at their requirements and say that it’s going to take a week of hard work, and then they will have to spend time testing, you have suddenly upset both their time frames and budgets. {{Both Matt and Julian have been REALLY serious about avoiding this kind of thing on LDCVia, and have made the phrase “it will be easy, it will only take an hour or so” a capital offence.}}
  2. Client reflexes: A lot of cloud clients are sales/marketing people, or from another branch where haggling and negotiation are built in. These people live in a fast-moving world and have never liked the iterative and somewhat slow-moving nature of traditional IT projects (“I just want it to work how I want”). For such people, paying “not a penny more” and getting more than you paid for are Good Things. A side effect of this is that such clients are quick to anger when they require a change that will take more money or time. Small changes are non-stop with cloud projects, where the client can see the work as it is done: I have heard phrases like “Just one more thing”, “It will only take 5 minutes”, “I had assumed”, and (my absolute favourite) “It’s just common sense, it should do XXX” more over the last six months than I have in the previous six years combined.
  3. They have already paid: Decent cloud services are not cheap, and clients have often already paid a fair lump before they get to customising their environment, so every penny you ask for is money they feel is an extra. Very much like the extras at a hotel: we all enjoy them but are really unhappy to see them on our bill. {{One thing I have found after multiple quotes is that honesty works even less with cloud-based quotes than it does with traditional IT quotes. I have had at least 3 occasions where I was genuinely puzzled that a quote I had done was not picked for a spec and a much cheaper quote was accepted. On each occasion I questioned the ability to deliver on a quote that low (even using offshore staff), and was told each time that the competition just use the quote to get in the door, then nickel-and-dime the project to death… I hate that, I really hate it >:(}}
  4. Re-tooling: All of the new cloud platforms are feature-rich and do a lot of things very quickly, but often only within the boundaries of a given tool or feature. I can see why this is so {{Hell, I’m one of the co-developers of a cloud platform, and when we are coding new stuff it is always with an eye to “how can we spend our time on stuff that will get the most use”.}}: you are aiming for the old 80/20 rule. So when a customer says “I just need it to do xxxxx” and you simply can’t make the tool do that, you then have to use a different tool and spend a load of time reconfiguring the new tool to look like the old one, just so you can add the one missing feature. It does not matter how clever that is or how hard you have worked; from the client’s point of view you have turned a simple 5-minute job into a 5-day job and are not providing value for money. {{BTW, the phrase “I’m trying to do what you asked” does not help here.}}
  5. Client rapport: Most cloud customisations are quick things, so as a developer you have had very little time to get to know the client: what they mean vs what they say, what pressures they are under, whether they have budget for these changes, etc. And they often view you as someone just getting in the way of their shiny new cloud platform.

These relatively new changes in the client/developer relationship mean you have to change your way of dealing with clients.

So how do we fix this??

This is the hard bit. I have lain awake for a number of nights wondering how to fix this, my time-honoured method of working my guts off having failed me.

So far I have come up with:

  1. Make it human: Try to make the relationship one between humans. Visit the site if possible, or use Skype video if not, so clients feel that they are working with people, and more importantly, people whose professional opinion they can trust.
  2. Speed up interactions: Not speed up coding, as that has actually not got much faster with the new platforms, but speed up the feedback you give clients. A quick Agile scrum with the client each morning can head a lot of bad things off and make them feel far more in the loop {{But be firm that the meeting is only for keeping everyone on track; it is not a place to add a few new requirements to the spec every day (ohhh boy, don’t they love to try that), and each person only gets 2-5 mins. If they want a longer meeting, book it later in the day.}}. Use it to also keep them informed of how much of their bank/project pool they have used or have left, even if they have pushed for a fixed price; additionally, the client can cut their losses if a small change is going to take a long time.
  3. Be firm: I’m rubbish at this part, but with cloud clients there is an underlying expectation that you get loads for free, and that includes any changes they might want to make after a spec has been agreed. There is a middle ground between “nickel-and-diming” and being used as a doormat; try to build enough rapport with your client that you both know where it is.

Anyone else got any good ideas?

SalesForce for Domino Dogs 2: Scheduled Agents

Welcome to the second part of the Salesforce for Domino Dogs series. This one is a monster, but don’t worry, we will be revisiting and clearing up some of the complex parts in other blog posts. What was a simple thing in Domino is quite complex in Salesforce, and for a variety of very good reasons. So… scheduled agents.


Scheduled Agents: These little sods are the pain of many a Domino admin’s life. Personally I blame them for the lock-down of many a Domino server from the free-for-all that was so empowering to users, but sometimes there is no other way to get round limits or deal with certain triggered activities.

In Salesforce scheduled processes are a bit more complex than you might be used to, and this is not just a Salesforce thing, but a cloud thing—no cloud provider wants their platform just churning along in the background eating up cycles and I/O time.

So let’s break it down:

  1. The code that does stuff
  2. The scheduled task that the code sits in
  3. The schedule itself

1) The Code

So this CAN just be any bit of Apex you want, but most of the time you will actually end up using Batch Apex. Batch Apex is a whole set of articles in its own right, but in this case it’s just a way of getting round the Apex limits.

… hmmm that does not help. OK let me explain:

You know how Domino scheduled agents will only run for so long before the agent manager shuts them down? This is to stop you writing rubbish code that screws up the system. Apex has a load of limits just like that, and one that hits quite often is the limit that you can only call sendEmail() 10 times in a given transaction (you can send 1,000 bulk emails per day). To get round this limit you have to “batch”, or break up your code into chunks. In Domino this would be like saying we want to process a whole view’s worth of documents, but in chunks of, say, five documents at a time.

An empty bit of batch apex looks like this:

global class NotifyAllUsersInAView implements Database.Batchable<sObject> {
    // The start method is called at the beginning of the code and works out which objects this code is going to run against.
    // It uses an SOQL query to work this out
    global Database.QueryLocator start(Database.BatchableContext BC){
    }
    // The execute method is called for each chunk of objects returned from the start function.
    global void execute(Database.BatchableContext BC, List<Contact> scope){
    }
    // The finish method is called once all the batches have been processed.
    global void finish(Database.BatchableContext BC){
    }
}

Let’s take it apart. First we will use the “start” function to get the list of objects we want to work through, so we take the empty function:

    // The start method is called at the beginning of the code and works out which objects this code is going to run against.
    // It uses an SOQL query to work this out
    global Database.QueryLocator start(Database.BatchableContext BC){
    }

… and add a query that gets all “Contacts” in Salesforce. We only need the email address for these contacts{{When you get an object via SOQL, you ask for all the fields you want; this is not like getting a Notes document, where you get access to all the document’s fields automatically.}}, so we add Email as one of the fields it gives us:

    // The start method is called at the beginning of the code and works out which objects this code is going to run against
    // It uses an SOQL query to work this out
    global Database.QueryLocator start(Database.BatchableContext BC){
        return Database.getQueryLocator([SELECT Id, Email FROM Contact]);
    }

Next we want the empty “execute” function which will do whatever we want with each chunk of objects it is sent:

    // The execute method is called for each chunk of objects returned from the start function
    global void execute(Database.BatchableContext BC, List<Contact> scope){
    }

So in this horrible bit of code, the chunk of objects is passed in a parameter called “scope”. We then just iterate over the objects and send an email to each contact (you can see the email address we asked for in “start” being used via “c.Email”):

    // The execute method is called for each chunk of objects returned from the start function
    global void execute(Database.BatchableContext BC, List<Contact> scope){
        for(Contact c : scope){
            Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
            String[] toAddresses = new String[] {c.Email};
            mail.setToAddresses(toAddresses);
            mail.setSubject('Another Annoying Email');
            mail.setPlainTextBody('Dear XXX, this is another pointless email you will hate me for');
            Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });
        }
    }

Finally we need an empty “finish” function which runs when all the batches are done:

    //The finish method is called once all the batches have been processed.
    global void finish(Database.BatchableContext BC){
    }

So let’s send a final email notification to the admins:

    //The finish method is called once all the batches have been processed
    global void finish(Database.BatchableContext BC){
        // Send an email to admin to say the agent is done.
        Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
        String[] toAddresses = new String[] {emailAddress};
        mail.setToAddresses(toAddresses);
        mail.setSubject('Agent XXX is Done.');
        mail.setPlainTextBody('Agent XXX is Done.');
        Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });
    }

Put it all together and you get:

global class NotifyAllUsersInAView implements Database.Batchable<sObject> {
    // String to hold the email address that the final notification will be sent to.
    // Replace its value with a valid email address.
    static String emailAddress = 'admin@admin.com';
    // The start method is called at the beginning of the code and works out which objects this code is going to run against.
    // It uses an SOQL query to work this out
    global Database.QueryLocator start(Database.BatchableContext BC){
        return Database.getQueryLocator([SELECT Id, Email FROM Contact]);
    }
    // The execute method is called for each chunk of objects returned from the start function.
    global void execute(Database.BatchableContext BC, List<Contact> scope){
        for(Contact c : scope){
            Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
            String[] toAddresses = new String[] {c.Email};
            mail.setToAddresses(toAddresses);
            mail.setSubject('Another Annoying Email');
            mail.setPlainTextBody('Dear XXX, this is another pointless email you will hate me for');
            Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });
        }
    }
    //The finish method is called once all the batches have been processed.
    global void finish(Database.BatchableContext BC){
        // Send an email to the admin to say the agent is done.
        Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
        String[] toAddresses = new String[] {emailAddress};
        mail.setToAddresses(toAddresses);
        mail.setSubject('Agent XXX is Done.');
        mail.setPlainTextBody('Agent XXX is Done.');
        Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });
    }
}

So now we need to call this code.

2) The Scheduled “Agent”

The code we have just written won’t run on a schedule on its own; we need to wrap it in a bit of code that can run on a schedule and decide how big the chunks will be. In this case they can’t be more than 10, as we would hit the Apex limit for sending emails. An empty schedule wrapper looks like this (I have called mine ‘Scheduled_Agent’, but you can call it anything):

global class Scheduled_Agent implements Schedulable{
    global void execute (SchedulableContext SC){
    }
}

Now let’s create a new instance of the batchable code we created in section 1, tell it to run in batches of 5 records, and tell it to execute:

global class Scheduled_Agent implements Schedulable{
    global void execute (SchedulableContext SC){
        Integer batchSize = 5;
        NotifyAllUsersInAView batch = new NotifyAllUsersInAView();
        Database.executeBatch(batch, batchSize);
    }
}
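
As a quick sanity check, you can also fire the batch off once by hand from the Developer Console’s Execute Anonymous window (more on that window in a moment), no schedule required:

// One-off manual run of the batch, with the same chunk size of 5.
Database.executeBatch(new NotifyAllUsersInAView(), 5);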

Code bit all done!

3) The Schedule

Now it comes time to actually schedule the code to run. You can set this up via the user interface by going into Setup, searching for “Apex Classes”, and selecting the result:


Select “Scheduled Apex”


As you can see, the options are limited to, at most, a daily run; you can’t make it any more frequent. However, we need it to run more often than that{{Well, we don’t, but you just know someone will demand it be sent more often.}}.
First open up your developer console by selecting your name in the top right and picking it from the drop-down.


Now open up the “Execute Anonymous Window” from the debug menu.


You can now run Apex code manually, and as such you can schedule jobs with a load more precision using a cron string. In this case we want to run the agent every 10 minutes within the hour, so we create a new instance of our “Scheduled_Agent” class and schedule it appropriately:
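
Something like the following does the trick. This is just a sketch: Salesforce cron strings have no “every N minutes” syntax, so we schedule six copies of the job, one for each 10-minute mark of the hour:

// Cron format: Seconds Minutes Hours Day_of_month Month Day_of_week
// Schedule six jobs, one per 10-minute mark within the hour.
for (Integer minute = 0; minute < 60; minute += 10) {
    String cron = '0 ' + minute + ' * * * ?';
    System.schedule('Scheduled_Agent ' + minute, cron, new Scheduled_Agent());
}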


Click “Execute” and you will see that the jobs have been scheduled. It should be noted that you can only have 100 scheduled jobs in your org, and this uses up 6 of them, so some planning would be good.
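
If you later need those slots back, you can query the scheduled jobs and abort them from the same window. Something along these lines should work (the name filter assumes the job names used in the snippet above):

// Find the jobs scheduled above and abort them, freeing their slots.
for (CronTrigger ct : [SELECT Id FROM CronTrigger
                       WHERE CronJobDetail.Name LIKE 'Scheduled_Agent%']) {
    System.abortJob(ct.Id);
}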

And there you go, scheduled agents. Let the legacy of horror continue!