How does Unit Testing fit into Separation of Concerns?
The answer to this is simple: without Separation of Concerns, there is no unit testing. It simply isn’t possible. To unit test you need to be able to create stub (mock) classes to send into the class you are testing via dependency injection (or through the use of a factory, more on this in the next section). If all of your concerns live in one class (DML transactions, SOQL queries, service methods, domain methods, etc.) you cannot fake anything. Let’s take a look at a couple of examples to illustrate this problem:
Unit Testing a class with SoC Implemented
//This is the class we would be testing
public with sharing class SoC_Class
{
    private DomainClass domainLayerClass;
    private SelectorClass selectorLayerClass;

    public SoC_Class(){
        //This calls our private constructor below
        this(new DomainClass(), new SelectorClass());
    }

    //Using a private constructor here so our test class can pass in the
    //dependencies we would like to mock in our unit tests
    @TestVisible
    private SoC_Class(DomainClass domainLayerClass, SelectorClass selectorLayerClass){
        this.domainLayerClass = domainLayerClass;
        this.selectorLayerClass = selectorLayerClass;
    }

    public List<Case> updateCases(Set<Id> objectIds){
        //Because of the dependency injection in the private constructor above
        //we can mock the results of these class calls.
        List<Case> objList = selectorLayerClass.selectByIds(objectIds);
        if(!objList.isEmpty()){
            return domainLayerClass.updateCases(objList);
        }
        else{
            throw new Custom_Exception();
        }
    }
}
//This is the class that we build to unit test the class above
@IsTest
public with sharing class SoC_Class_Test
{
    @IsTest
    private static void updateCases_CaseListResults_UnitTest(){
        //Creating a fake case id using the fflib_IDGenerator class. We do this
        //to avoid unnecessary dml insert statements. Note how the same id is
        //used everywhere.
        Id mockCaseId = fflib_IDGenerator.generate(Case.SObjectType);
        //Creating a set of ids that we pass to our methods.
        Set<Id> caseIds = new Set<Id>{mockCaseId};
        //Creating the list of cases we'll return from our selector method
        List<Case> caseList = new List<Case>{new Case(Id = mockCaseId,
            Subject = 'Hi', Status = 'New', Origin = 'Email')};
        List<Case> updatedCaseList = new List<Case>{new Case(Id = mockCaseId,
            Subject = 'Panther', Status = 'Chocolate', Origin = 'Email')};
        //Creating our mock class representations by using the ApexMocks class's
        //mock method and passing it the appropriate class type.
        fflib_ApexMocks mocks = new fflib_ApexMocks();
        DomainClass mockDomain = (DomainClass) mocks.mock(DomainClass.class);
        SelectorClass mockSelector = (SelectorClass) mocks.mock(SelectorClass.class);
        //After you've set up your mocks above, we need to stub (set up) the
        //expected method calls and what they should return.
        mocks.startStubbing();
        //This is the actual selectByIds method that we call in the updateCases
        //method that we are testing. Here we are setting up the fake return
        //result it will return.
        mocks.when(mockSelector.selectByIds(caseIds)).thenReturn(caseList);
        mocks.when(mockDomain.updateCases(caseList)).thenReturn(updatedCaseList);
        //When you are done setting these up, DO NOT FORGET to call the
        //stopStubbing method or you're gonna waste hours of your life confused
        mocks.stopStubbing();
        Test.startTest();
        //Passing our mock classes into our private constructor
        List<Case> updatedCases = new SoC_Class(mockDomain, mockSelector).updateCases(caseIds);
        Test.stopTest();
        System.assertEquals('Panther', updatedCases[0].Subject, 'Case subject not updated');
        //Verifying this method was never called. We didn't intend to call it,
        //so we're just checking that we didn't.
        ((DomainClass)mocks.verify(mockDomain, mocks.never().description('This method was called but it shouldn\'t have been'))).createOpportunities();
        //Checking that we did indeed call the updateCases method as expected.
        ((DomainClass)mocks.verify(mockDomain)).updateCases(caseList);
    }
}
Above you can see we are passing fake/mock classes into the class we are testing and staging fake return results for the class methods we are calling. Thanks to separating out our concerns, this is possible. Let’s take a look at how impossible this is without SoC in place.
Unit Testing a class without SoC Implemented
//This is the class we would be testing
public with sharing class No_SoC_Class
{
    public List<Case> updateCases(Set<Id> objectIds){
        //There is no dependency injection here, so there is no way to mock the
        //results of this query or this DML statement.
        List<Case> objList = [SELECT Id, Subject, Status FROM Case WHERE Id IN :objectIds];
        if(!objList.isEmpty()){
            for(Case cs : objList){
                cs.Subject = 'Panther';
                cs.Status = 'Chocolate';
            }
            update objList;
            return objList;
        }
        else{
            throw new Custom_Exception();
        }
    }
}
@IsTest
public with sharing class No_SoC_Class_Test
{
    @TestSetup
    private static void setupData(){
        Case newCase = new Case(Subject = 'Hi', Status = 'New', Origin = 'Email');
        insert newCase;
    }

    @IsTest
    private static void updateCases_CaseListResults_IntegrationTest(){
        Set<Id> caseIds = new Map<Id, Case>([SELECT Id FROM Case]).keySet();
        List<Case> updatedCases = new No_SoC_Class().updateCases(caseIds);
        System.assertEquals('Panther', updatedCases[0].Subject, 'Case subject not updated');
    }
}
You can see above that we did no mocking… it wasn’t possible. We had no way of passing fake/mock classes into this class, so we had to write an integration test that creates real data and updates real data. This test will run considerably slower.
How do I transition my Existing Code to start leveraging SoC so I can use Unit Test Mocking?
It’s not a simple path, unfortunately; there is a lot of work ahead of you to start this transition, but it is possible. The key is to start small. If you are a tech lead, the first thing you need to do is find the time to train your devs on what SoC and mocking are and why you would use them. It’s critical they understand the concepts before trying to roll something like this out. You cannot do everything as a lead even if you’d like to; you need to build your team’s skillset first. If you aren’t a lead, you first need to convince your lead why it’s critical you start taking steps in that direction and work to get them on board with it. If they ignore you, you need a new lead… After accomplishing either of the above, you should do the following:
1) Frame your situation in a way that the business arm of your operation understands the importance of altering your code architecture to leverage SoC and unit testing. This is typically pretty easy: just inform them that by spending a few extra points per story to transition the code, you are going to gain more robust testing (resulting in fewer manual tests) and the code will become more flexible over time, allowing for easier feature additions to your org. Boom, done, product owner buy-in has been solidified. Ok, lol, sometimes it’s not quite this easy, but you know your business people. Give them something they won’t ignore, just be careful not to make yourself or your team sound incompetent, don’t overcommit and make promises you can’t keep, and frame it in words business people understand (mostly dollar signs).
2) Start small and take this on a story by story basis. Say you have a new story to update some UI in your org; with business owner buy-in, tack on a few extra points to the story to switch it to start using SoC and unit testing. Then, MAKE SURE YOUR TESTS ARE INCREDIBLE AND YOU TRUST THEM! I say this because, if your tests are incredible, you never have to ask for permission again… I mean think about it, wtf does the average business person know about code? As long as I don’t introduce any new bugs I could change half the code base and they’d never even know. If your tests are golden, your ability to change on a whim is as well. Make those test classes more trustworthy than your dog (or cat I guess… but I wouldn’t trust a cat).
3) Over time, your transition will be done… it might take a year, it might take 5 years; it depends on how busted your codebase was to begin with. Hopefully, by the time you are done, you have an extremely extensible codebase with tests you can trust, and you never have to ask for permission to do a damn thing again.
What is the Separation of Concerns Design Principle?
Basically, Separation of Concerns is the practice of putting logical boundaries on your code. Putting these logical boundaries on your code helps make it easier to understand, easier to maintain and much more flexible when it needs to be altered (and every code base ever has to be altered all the time).
In the Salesforce ecosystem there are three major areas of concern we should ideally separate our code into. They are the following:
The Service Layer:
The Service Layer should house 100% of your non-object-specific business logic (object-specific logic is most often handled by the domain layer). This is the logic that is specific to your organization’s business rules. Say, for instance, you have a part of your Salesforce app that focuses on Opportunity Sales Projections, and the Opportunity Sales Projection app looks at the Opportunity, Quote, Product and Account objects. You might make an OpportunitySalesProjection_Service apex class that houses methods with business logic specific to your Opportunity Sales Projection app. More information on the Service Layer here.
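As a rough sketch, such a service class might look like the following (the class name comes from the example above; the method and logic inside it are hypothetical, purely for illustration):

```apex
//Hypothetical service class for the Opportunity Sales Projection app.
//It houses business logic that spans multiple objects (Opportunity,
//Quote, Product, Account) rather than logic owned by any one object.
public with sharing class OpportunitySalesProjection_Service
{
    public static void recalculateProjections(Set<Id> opportunityIds)
    {
        //1) Gather the records involved (ideally via selector classes)
        //2) Apply the cross-object projection rules for your business
        //3) Register the changes with a unit of work and commit them
    }
}
```

Controllers, batch jobs and triggers would all call this one method rather than each re-implementing the projection rules.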
The Domain Layer:
The Domain Layer houses your individual objects’ (database tables’) trigger logic. It also houses object-specific validation logic, logic that should always be applied on the insert of every record for an object, and object-specific business logic (like how a task may be created for a specific object type, etc). If you use the Account object in your org, you should create a Domain class equivalent for the Account object through the use of a trigger handler class of some sort. More information on the Domain Layer here.
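A minimal sketch of what an Account domain class could look like using the Apex Common fflib_SObjectDomain base class (the Rating defaulting rule here is a made-up example):

```apex
//Hypothetical domain class for the Account object. Object-specific
//defaulting/validation logic lives here and runs via the trigger handler.
public with sharing class Accounts extends fflib_SObjectDomain
{
    public Accounts(List<Account> records)
    {
        super(records);
    }

    //Applied on the insert of every Account record
    public override void onApplyDefaults()
    {
        for(Account acc : (List<Account>) Records)
        {
            if(acc.Rating == null)
            {
                acc.Rating = 'Warm';
            }
        }
    }

    //Allows the fflib_Application domain factory to construct this class
    public class Constructor implements fflib_SObjectDomain.IConstructable
    {
        public fflib_SObjectDomain construct(List<SObject> records)
        {
            return new Accounts((List<Account>) records);
        }
    }
}
```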
The Selector Layer:
The Selector Layer is responsible for querying your objects (database tables) in Salesforce. Selector layer classes should be made for each individual object (or grouping of objects) that you intend to write queries for in your code. The goal of the selector layer is to maintain query consistency (consistency in ordering, common fields queried, etc.) and to be able to reuse common queries easily instead of re-writing them over and over again everywhere.
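For example, a Case selector built on the Apex Common fflib_SObjectSelector base class might look roughly like this (the field list is illustrative):

```apex
//Hypothetical selector class for the Case object. The field list and any
//ordering are centralized here so every Case query stays consistent.
public with sharing class Case_Selector extends fflib_SObjectSelector
{
    public Schema.SObjectType getSObjectType()
    {
        return Case.SObjectType;
    }

    public List<Schema.SObjectField> getSObjectFieldList()
    {
        return new List<Schema.SObjectField>{
            Case.Id, Case.Subject, Case.Status, Case.Origin};
    }

    //A common, reusable query that other classes call instead of
    //re-writing the same SOQL everywhere
    public List<Case> selectByIds(Set<Id> caseIds)
    {
        return (List<Case>) selectSObjectsById(caseIds);
    }
}
```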
Why is it Useful?
There are many benefits to implementing SoC, most of which were outlined above, but here are the highlights:
1) It modularizes your code into easy-to-understand packages, making it easier to know what code controls what, why and when.
2) It massively reduces the amount of code in your org by centralizing your logic into different containers. For instance, maybe you currently have 13 different apex controllers that house similar case business logic. If you placed that business logic into a service class and had all 13 apex controllers call that service class instead, your life would be a whole lot simpler. This can get a lot more abstract and turn into absolutely unprecedented code reduction, but we have to start somewhere a bit simpler.
3) Separation of Concerns lends itself to writing extremely well done and comprehensive unit tests. It allows for easy dependency injection, which allows you to mock a class’s dependent classes in your test classes. We’ll go over this more when we get to the unit testing and Apex Mocks section of this tutorial, but if you want a quick and easy explanation, please feel free to check out my video covering dependency injection and mocking in apex.
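To make the 13-controllers example in point 2 concrete, here’s a hypothetical sketch: each controller shrinks to a thin wrapper and the shared case logic lives in one service class (all names here are made up for illustration):

```apex
//One of the 13 hypothetical controllers, now a thin wrapper
public with sharing class CaseEscalationController
{
    @AuraEnabled
    public static void escalateCases(List<Id> caseIds)
    {
        //Every controller that needs this logic calls the same service
        Case_Service.escalateCases(new Set<Id>(caseIds));
    }
}
```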
How does the Apex Common Library help with SoC?
The Apex Common Library was quite literally built upon the three layers outlined above. It provides an unrivaled foundation for implementing SoC in your Salesforce org. When I started this tutorial series I was not convinced it was the absolute best choice out there, but after hundreds of hours of practice, documentation and experimentation with other similar libraries, I feel I can confidently say (as of today) that this is something the community is lucky even exists, and it needs to be leveraged much more than it is today.
Example Code
All of the code examples in this repo are examples of SoC in action. You can check the whole repo out here. For layer specific examples check out the layer specific pages of this wiki.
What is Apex Mocks?
Apex Mocks is a unit test mocking framework for Apex that was inspired by the very popular Java Mockito framework. The Apex Mocks framework is built using Salesforce’s Stub API, which is a good thing; there are mocking frameworks in existence that do not leverage the Stub API, and you shouldn’t use them as they are considerably less performant. In fact, as a remnant of the past, Apex Mocks still has an option to not use the Stub API. Don’t use it.
To make this super simple, Apex Mocks is an excellent framework that allows you to not only create mock (fake) versions of your classes for tests, but also allows you to do a number of other things like create mock (fake) records, verify the amount of times a method was called, verify method call order and lots more. There is not another mocking framework for the Apex language that is anywhere near as robust.
Do you have to use the Apex Common Library to use Apex Mocks?
While you don’t have to use the Apex Common Library to use Apex Mocks, they are built to work extremely well together. Without the Apex Common Library you will need to ensure that all of your classes utilize dependency injection. While this isn’t a massive feat, it’s something you have to be cognizant of. If you decide to use the Apex Common Library and leverage the fflib_Application class factory, you can use its setMock methods to avoid needing to set up all your classes with dependency injection. It’s pretty convenient and makes the whole situation a bit easier. We’ll take a look at both methods below:
Using Apex Mocks without Apex Common Example
//Class we're testing
public with sharing class SoC_Class
{
    private DomainClass domainLayerClass;
    private SelectorClass selectorLayerClass;

    public SoC_Class(){
        //This calls our private constructor below
        this(new DomainClass(), new SelectorClass());
    }

    //THE MAJOR DIFFERENCE IS HERE! We're using a private constructor so our test
    //class can pass in the dependencies we would like to mock in our unit tests
    @TestVisible
    private SoC_Class(DomainClass domainLayerClass, SelectorClass selectorLayerClass){
        this.domainLayerClass = domainLayerClass;
        this.selectorLayerClass = selectorLayerClass;
    }

    public List<Case> updateCases(Set<Id> objectIds){
        //Because of the dependency injection in the private constructor above we
        //can mock the results of these class calls.
        List<Case> objList = selectorLayerClass.selectByIds(objectIds);
        if(!objList.isEmpty()){
            return domainLayerClass.updateCases(objList);
        }
        else{
            throw new Custom_Exception();
        }
    }
}
//Test class
@IsTest
public with sharing class SoC_Class_Test
{
    @IsTest
    private static void updateCases_CaseListResults_UnitTest(){
        Id mockCaseId = fflib_IDGenerator.generate(Case.SObjectType);
        Set<Id> caseIds = new Set<Id>{mockCaseId};
        List<Case> caseList = new List<Case>{new Case(Id = mockCaseId, Subject = 'Hi', Status = 'New', Origin = 'Email')};
        List<Case> updatedCaseList = new List<Case>{new Case(Id = mockCaseId, Subject = 'Panther', Status = 'Chocolate', Origin = 'Email')};
        fflib_ApexMocks mocks = new fflib_ApexMocks();
        DomainClass mockDomain = (DomainClass) mocks.mock(DomainClass.class);
        SelectorClass mockSelector = (SelectorClass) mocks.mock(SelectorClass.class);
        mocks.startStubbing();
        mocks.when(mockSelector.selectByIds(caseIds)).thenReturn(caseList);
        mocks.when(mockDomain.updateCases(caseList)).thenReturn(updatedCaseList);
        mocks.stopStubbing();
        Test.startTest();
        //THE MAJOR DIFFERENCE IS HERE! We are passing the mock classes we created
        //above to the test-visible private constructor to leverage dependency
        //injection for mocking.
        List<Case> updatedCases = new SoC_Class(mockDomain, mockSelector).updateCases(caseIds);
        Test.stopTest();
        System.assertEquals('Panther', updatedCases[0].Subject, 'Case subject not updated');
        ((DomainClass)mocks.verify(mockDomain, mocks.never().description('This method was called but it shouldn\'t have been'))).createOpportunities();
        ((DomainClass)mocks.verify(mockDomain)).updateCases(caseList);
    }
}
If you take a look at the SoC_Class above, you can see that we are leveraging the concept of dependency injection to allow our SoC_Class_Test test class to inject the mock classes during our unit test. This is the key difference. All of your classes MUST LEVERAGE DEPENDENCY INJECTION to be able to incorporate unit test mocking into your codebase. Now let’s take a look at how to do the same thing using the Apex Common Library and the fflib_Application class.
Using Apex Mocks with Apex Common Example
//Example Application factory class. This is only here because it is referenced in the classes below.
public with sharing class Application
{
    public static final fflib_Application.UnitOfWorkFactory UOW =
        new fflib_Application.UnitOfWorkFactory(
            new List<SObjectType>{
                Case.SObjectType,
                Contact.SObjectType,
                Account.SObjectType,
                Task.SObjectType}
        );
    public static final fflib_Application.ServiceFactory service =
        new fflib_Application.ServiceFactory(
            new Map<Type, Type>{
                Task_Service_Interface.class => Task_Service_Impl.class}
        );
    public static final fflib_Application.SelectorFactory selector =
        new fflib_Application.SelectorFactory(
            new Map<SObjectType, Type>{
                Case.SObjectType => Case_Selector.class,
                Contact.SObjectType => Contact_Selector.class,
                Task.SObjectType => Task_Selector.class}
        );
    public static final fflib_Application.DomainFactory domain =
        new fflib_Application.DomainFactory(
            Application.selector,
            new Map<SObjectType, Type>{
                Case.SObjectType => Cases.Constructor.class,
                Contact.SObjectType => Contacts.Constructor.class}
        );
}
//Class we're testing
public with sharing class Task_Service_Impl implements Task_Service_Interface
{
    public void createTasks(Set<Id> recordIds, Schema.SObjectType objectType)
    {
        //THE MAJOR DIFFERENCE IS HERE! Instead of using constructors to do
        //dependency injection we are initializing our classes using the
        //fflib_Application factory class.
        fflib_ISObjectDomain objectDomain = Application.domain.newInstance(recordIds);
        fflib_ISObjectSelector objectSelector = Application.selector.newInstance(objectType);
        fflib_ISObjectUnitOfWork unitOfWork = Application.UOW.newInstance();
        List<SObject> objectsThatNeedTasks = new List<SObject>();
        if(objectSelector instanceof Task_Selector_Interface){
            System.debug('Selector an instance of tsi');
            Task_Selector_Interface taskFieldSelector = (Task_Selector_Interface)objectSelector;
            objectsThatNeedTasks = taskFieldSelector.selectRecordsForTasks();
        }
        else{
            System.debug('Selector not an instance of tsi');
            objectsThatNeedTasks = objectSelector.selectSObjectsById(recordIds);
        }
        if(objectDomain instanceof Task_Creator_Interface){
            System.debug('Domain an instance of tci');
            Task_Creator_Interface taskCreator = (Task_Creator_Interface)objectDomain;
            taskCreator.createTasks(objectsThatNeedTasks, unitOfWork);
        }
        try{
            unitOfWork.commitWork();
        }
        catch(Exception e){
            throw e;
        }
    }
}
@IsTest
public with sharing class Task_Service_Impl_Test
{
    @IsTest
    private static void createTasks_CasesSuccess_UnitTest(){
        Id mockCaseId = fflib_IDGenerator.generate(Case.SObjectType);
        Set<Id> caseIds = new Set<Id>{mockCaseId};
        List<Case> caseList = new List<Case>{new Case(Id = mockCaseId, Subject = 'Hi', Status = 'New', Origin = 'Email')};
        fflib_ApexMocks mocks = new fflib_ApexMocks();
        fflib_SObjectUnitOfWork mockUOW = (fflib_SObjectUnitOfWork) mocks.mock(fflib_SObjectUnitOfWork.class);
        Cases mockCaseDomain = (Cases) mocks.mock(Cases.class);
        Case_Selector mockCaseSelector = (Case_Selector) mocks.mock(Case_Selector.class);
        mocks.startStubbing();
        mocks.when(mockCaseSelector.sObjectType()).thenReturn(Case.SObjectType);
        mocks.when(mockCaseSelector.selectSObjectsById(caseIds)).thenReturn(caseList);
        mocks.when(mockCaseSelector.selectRecordsForTasks()).thenReturn(caseList);
        mocks.when(mockCaseDomain.sObjectType()).thenReturn(Case.SObjectType);
        ((fflib_SObjectUnitOfWork)mocks.doThrowWhen(new DmlException(), mockUOW)).commitWork();
        mocks.stopStubbing();
        //THE MAJOR DIFFERENCE IS HERE! Instead of dependency injection we are
        //using the setMock methods in the fflib_Application factory classes to
        //set our mock classes for this unit test.
        Application.UOW.setMock(mockUOW);
        Application.domain.setMock(mockCaseDomain);
        Application.selector.setMock(mockCaseSelector);
        try{
            Test.startTest();
            Task_Service.createTasks(caseIds, Case.SObjectType);
            Test.stopTest();
        }
        catch(Exception e){
            System.assert(e instanceof DmlException);
        }
        ((Cases)mocks.verify(mockCaseDomain, mocks.never().description('This method was called but it shouldn\'t have been'))).handleAfterInsert();
        ((Cases)mocks.verify(mockCaseDomain)).createTasks(caseList, mockUOW);
    }
}
As you can see, in this example we no longer leverage dependency injection to get our mock unit tests up and running. Instead we use the setMock method available on all of the inner factory classes in the fflib_Application class to set up our mock classes for the unit test. It makes things a bit easier in the long run.
Now that we’ve seen how to set up a mock class with or without Apex Common, let’s figure out all the cool things Apex Mocks gives you to assert your logic operated the way you anticipated it would! We’ll start with a section on what stubbing is below!
What is Stubbing and how to Stub
Soooooo, what exactly is stubbing? Basically it’s the act of providing fake return responses for a mocked class’s methods, and it’s super extra important when setting up your unit tests because without stubs, well, frankly nothing is gonna work. So let’s check out how to do stubbing below:
Id mockCaseId = fflib_IDGenerator.generate(Case.SObjectType);
Set<Id> caseIds = new Set<Id>{mockCaseId};
List<Case> caseList = new List<Case>{new Case(Id = mockCaseId, Subject = 'Hi', Status = 'New', Origin = 'Email')};
//Creating the mock/fake version of our case selector class
fflib_ApexMocks mocks = new fflib_ApexMocks();
Case_Selector mockCaseSelector = (Case_Selector) mocks.mock(Case_Selector.class);
//Here is where we start stubbing the fake return values for the methods that will be called by the actual class we are testing.
//You need to initialize the stubbing by calling the mocks.startStubbing method.
mocks.startStubbing();
mocks.when(mockCaseSelector.sObjectType()).thenReturn(Case.SObjectType);
mocks.when(mockCaseSelector.selectSObjectsById(caseIds)).thenReturn(caseList);
//This is basically saying, "hey, when the method we're testing calls the selectRecordsForTasks method on our mocked case selector,
//please always return this caseList."
mocks.when(mockCaseSelector.selectRecordsForTasks()).thenReturn(caseList);
//Don't forget to stop stubbing!! Very importante!
mocks.stopStubbing();
//Make sure to do your stubbing before sending your mock class to your application factory class!!!
Application.selector.setMock(mockCaseSelector);
//Now we test the actual method that will call the stubbed methods above. Make sure to create the mock classes and the stubbed
//method responses before doing this test!
Test.startTest();
Task_Service.createTasks(caseIds, Case.SObjectType);
Test.stopTest();
Alright, let’s go over this a bit (and if you didn’t read the comments in the code above, please do!). The first thing you should know is that you should only stub methods that are actually called by the real method you are testing (in our case, the Task_Service.createTasks method). Now that we’ve clarified that, let’s break down one of the stubs we built for our mocked class. Take this stub: ‘mocks.when(mockCaseSelector.selectRecordsForTasks()).thenReturn(caseList);’. What we are effectively saying here is: when our mockCaseSelector’s selectRecordsForTasks method is called, always return the caseList value. Not too complicated, but maybe confusing if you’ve never seen the syntax before.
Hopefully this has helped explain the basic concept of stubbing. There’s more we can do with stubbing when it comes to throwing fake errors, so be sure to check out the “How to mock exceptions being thrown” section below for more information on that subject.
How to verify the class you’re actually testing appropriately called your mock class methods
If you’ve never done mocking before this might seem super weird: instead of relying on asserts alone, we need another way to verify that our code functioned as anticipated with our fake classes and return values. So what exactly do you verify/assert in your test? What we’re gonna end up doing is verifying that we did indeed call the fake methods with the parameters we anticipated (or that we didn’t call them at all). So let’s take a look at how to use the verify method in the fflib_ApexMocks class to confirm that a method we intended to call did indeed get called:
Id mockCaseId = fflib_IDGenerator.generate(Case.SObjectType);
//Creating a set of ids that we pass to our methods.
Set<Id> caseIds = new Set<Id>{mockCaseId};
//Creating the list of cases we'll return from our selector method
List<Case> caseList = new List<Case>{new Case(Id = mockCaseId, Subject = 'Hi', Status = 'New', Origin = 'Email')};
//Creating our mock class representations by using the ApexMocks class's mock method
//and passing it the appropriate class type.
fflib_ApexMocks mocks = new fflib_ApexMocks();
fflib_SObjectUnitOfWork mockUOW = (fflib_SObjectUnitOfWork) mocks.mock(fflib_SObjectUnitOfWork.class);
Cases mockCaseDomain = (Cases) mocks.mock(Cases.class);
mocks.startStubbing();
mocks.when(mockCaseDomain.sObjectType()).thenReturn(Case.SObjectType);
mocks.stopStubbing();
Application.UOW.setMock(mockUOW);
Application.domain.setMock(mockCaseDomain);
Test.startTest();
//Calling the method we're actually testing (this is a real method call)
Task_Service.createTasks(caseIds, Case.SObjectType);
Test.stopTest();
//THIS IS IT!!! HERE IS WHERE WE ARE VERIFYING THAT WE ARE CALLING THE CASE DOMAIN CREATE TASKS CLASS ONCE!!!
((Cases)mocks.verify(mockCaseDomain, 1)).createTasks(caseList, mockUOW);
On the very last line of the above code we are using the verify method to ensure that the createTasks method in the Cases class was called exactly one time with the caseList and mockUOW values passed to it. You might be looking at this and thinking, “But why Matt? Why would I actually care to do this?”. The answer is pretty simple. Even though the code in those mocked classes isn’t running, you still want to (really need to) verify that your code decided to call it (or not call it), and that your code didn’t call the method more times than you were anticipating. Remember, the major purpose of a unit test is to test that the logic in your class works how you anticipate it to. Verifying that methods were called correctly is extremely important. If your class calls the wrong methods at the wrong time, super ultra terrible consequences could end up taking place.
How to verify your data was altered by your class when it was sent to a mocked class (Matchers)
Whoooo that’s a long title, lol. Here’s the dealio: at least half the time, the data you pass into the class you are actually testing (the one class we aren’t mocking) will be altered and then sent into another mocked class’s method to be further processed. Chances are you’d like to verify that the data was altered prior to being sent to your mocked class’s method. Never fear my good ole pals, the fflib_Match class is here to help you do just that! Let’s take a look at how we can use matchers to identify exactly what was passed into our mocked class’s methods!
Id mockCaseId = fflib_IDGenerator.generate(Case.SObjectType);
List<Case> caseList = new List<Case>{new Case(
Id = mockCaseId,
Subject = 'Hi',
Status = 'New',
Origin = 'Email')};
fflib_ApexMocks mocks = new fflib_ApexMocks();
fflib_SObjectUnitOfWork mockUOW = (fflib_SObjectUnitOfWork) mocks.mock(fflib_SObjectUnitOfWork.class);
Application.UOW.setMock(mockUOW);
Test.startTest();
//Calling the method we're actually testing (this is a real method call). IN THIS METHOD WE ARE CHANGING THE CASE SUBJECTS FROM 'Hi' to 'Bye'
new Cases().changeSubject(caseList);
Test.stopTest();
//Here we are using the fflib_Match class to create a new sobject to match
//against, to verify the subject field was actually changed in the above method call.
List<Case> caseMatchingList = (List<Case>)fflib_Match.sObjectsWith(new List<Map<Schema.SObjectField, Object>>{new Map<SObjectField, Object>{
    Case.Id => mockCaseId,
    Case.Subject => 'Bye',
    Case.Status => 'New',
    Case.Origin => 'Email'}
});
//Here we are verifying that our unit of work's registerDirty method was indeed
//called with the updated data we expected by using a matcher. This should pass,
//pending your code actually did update the cases prior to calling this method as you intended.
((fflib_ISObjectUnitOfWork)mocks.verify(mockUOW)).registerDirty(caseMatchingList);
//This also works (confusing right)... but it proves a lot less. It simply proves
//your method was called with that list of cases, but it doesn't prove it
//updated those cases prior to calling it.
((fflib_ISObjectUnitOfWork)mocks.verify(mockUOW,1)).registerDirty(caseList);
Alright alright alright, you might be looking at this and being like, “But bruh, why make a matcher at all to get this done? Why not just keep it simple and make another list of cases to represent the changes?”. Fair enough question, hombre, but here’s the simple answer… it’s not gonna work, trust me, lol. Basically, ApexMocks needs a way to internally check that these events actually occurred, and the matcher we set up is the key to doing that. If you wanna dive deep into how it works, feel free, but that’s a whole different subject we’re not gonna get into the weeds of.
So we can see that this is extremely useful for verifying that our logic occurred in the right order and updated our data as anticipated before calling the method in one of our mocked classes. Pretty cool, and it makes our unit testing that much more accurate.
There are a ton more methods in the fflib_Match class that you can leverage to do virtually whatever you can think of; there’s a list of them near the bottom of this page. Definitely check them out.
How to mock exceptions being thrown (this is THA BEST!!)
Even if you don’t care about the test-run time savings mocking gives you when moving code to production, you should pick up mocking at the very least for easier exception testing. IT IS SO MUCH EASIER TO VERIFY EXCEPTIONS!!!! A magical thing that honestly can’t be beat. Tons of errors that you should rightfully handle (like record locking errors) are borderline impossible to reproduce with real data… mocking makes it possible. Some people out there might be like, “Hah, but I don’t even catch my exceptions, why should I care?”. To that I say, time to change, my guy, lol. You need to catch those bad boiz, trust me. Anywhooo, let’s get right to it! How do we mock our exceptions to make testing them so simple and so glorious? Let’s check out the code below:
Here’s an example of mocking an error when the method returns nothing:
fflib_ApexMocks mocks = new fflib_ApexMocks();
fflib_SObjectUnitOfWork mockUOW = (fflib_SObjectUnitOfWork) mocks.mock(fflib_SObjectUnitOfWork.class);
mocks.startStubbing();
//This right here is what's doing the mock error throwing!! Basically we're saying, when the commitWork method in our
//mockUOW class is called by the method we are truly testing, please throw a new DMLException.
((fflib_SObjectUnitOfWork)mocks.doThrowWhen(new DmlException(), mockUOW)).commitWork();
mocks.stopStubbing();
Application.UOW.setMock(mockUOW);
try{
    Test.startTest();
    //Calling the method we're actually testing (this is a real method call)
    Task_Service.createTasks(caseIds, Case.SObjectType);
    Test.stopTest();
    //If we get here the stubbed exception was never thrown, so fail the test.
    //(Assertion failures cannot be caught by the catch block below.)
    System.assert(false, 'Expected a DmlException to be thrown');
}
catch(Exception e){
    //Because we are throwing an exception in our stubs we need to wrap our real
    //method call in a try catch and check whether it actually threw the
    //exception we anticipated it throwing.
    System.assert(e instanceof DmlException);
}
Here’s a similar example but with a method that returns data:
fflib_ApexMocks mocks = new fflib_ApexMocks();
Case_Selector mockCaseSelector = (Case_Selector) mocks.mock(Case_Selector.class);
mocks.startStubbing();
//These two methods must be stubbed for selectors if using apex common library
mocks.when(mockCaseSelector.sObjectType()).thenReturn(Case.SObjectType);
mocks.when(mockCaseSelector.selectSObjectsById(caseIds)).thenReturn(caseList);
//This right here is what's doing the mock error throwing!! Basically we're saying, when the selectRecordsForTasks method in our
//mockCaseSelector class is called by the method we are truly testing, please throw a new DMLException.
mocks.when(mockCaseSelector.selectRecordsForTasks()).thenThrow(new DmlException());
mocks.stopStubbing();
Application.selector.setMock(mockCaseSelector);
try{
    Test.startTest();
    //Calling the method we're actually testing (this is a real method call)
    Task_Service.createTasks(caseIds, Case.SObjectType);
    Test.stopTest();
    //If we get here the stubbed exception was never thrown, so fail the test.
    //(Assertion failures cannot be caught by the catch block below.)
    System.assert(false, 'Expected a DmlException to be thrown');
}
catch(Exception e){
    //Because we are throwing an exception in our stubs we need to wrap our real
    //method call in a try catch and check whether it actually threw the
    //exception we anticipated it throwing.
    System.assert(e instanceof DmlException);
}
You can see that there’s a small difference between the two. For the method that actually returns data the chain of events is a bit easier to follow: mocks.when(mockCaseSelector.selectRecordsForTasks()).thenThrow(new DmlException());. This is just saying, when we call the mock selector class’s selectRecordsForTasks method, throw a new DmlException.
For the method that doesn’t return any data (a void method) the syntax is a little different: ((fflib_SObjectUnitOfWork)mocks.doThrowWhen(new DmlException(), mockUOW)).commitWork();. Instead of the mocks.when(method).thenThrow(exception) setup we are now using the doThrowWhen method, which kinda combines when(method).thenThrow(exception) into a single statement. It takes two parameters: the first is the exception you want to throw, and the second is the mocked class you are attaching this exception throwing event to. You then cast mocks.doThrowWhen(exception, mockedClass) to the class type it will end up representing, wrap it in parentheses and call the method you intend it to throw an error on. Weird setup, awesomely simple exception testing though.
Also worth noting, don’t forget to wrap the method you are truly testing (that will now throw an error) in a try catch in your test, and in the catch block assert that you are getting the exception you expected thrown.
How to test important method side-effects truly occurred (Answering)
It is not at all uncommon to have parameters passed into a method by reference, and for that method to update those parameters but not pass them back. Some people refer to this as method side effects. To do truly good unit tests, however, we should mock those side effects as well. That is where the fflib_Answer interface comes into play! Now, unlike most things I’ve written about in this series, there is already an excellent blog post by Eric Kintzer on this Apex Mocks topic, and well… frankly I don’t need to re-invent the wheel here. Please go check out how to do Answering with Apex Mocks here.
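To give you a flavor of the idea before you click through, here’s a minimal hypothetical sketch (the Cases_Service class, closeCases method and caseList variable are made up for illustration): an fflib_Answer implementation grabs the argument the mocked method received and emulates the side effect on it.

```apex
//Hypothetical example: emulate a void mocked method that mutates the list passed to it
public class CloseCasesAnswer implements fflib_Answer {
    public Object answer(fflib_InvocationOnMock invocation) {
        //Grab the first argument the mocked method was called with
        List<Case> cases = (List<Case>) invocation.getArgument(0);
        for (Case c : cases) {
            c.Status = 'Closed'; //the side effect we want our mock to reproduce
        }
        return null; //the mocked method is void, so nothing to return
    }
}

//Then, inside mocks.startStubbing()/mocks.stopStubbing():
//((Cases_Service) mocks.doAnswer(new CloseCasesAnswer(), mockCasesService)).closeCases(caseList);
```

Eric’s post walks through the full setup in much more detail, so definitely read it for the real thing.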
How to check method call order
Sometimes when you are doing unit tests it’s extremely important to verify the order in which your methods were called. Maybe, depending on what data your method receives, your mock class’s methods could be called in varying orders. It’s best to check that the code is actually doing that! If you do need to check for that in your code, no worries, that is why the fflib_InOrder class exists! Thankfully this is yet another area I really don’t need to cover again in this guide, because a good blog post covering this topic exists as well! If you need to check method ordering in your unit testing, please check out the blog post here!
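As a quick taste, an ordered verification looks roughly like this (a sketch assuming mockUOW is a mocked fflib_SObjectUnitOfWork and caseList is a list of cases, and that our code should call registerDirty before commitWork):

```apex
//Create an InOrder verifier for the mocks whose call order we care about
fflib_InOrder inOrder = new fflib_InOrder(mocks, new List<Object>{ mockUOW });

//These verifications fail unless the methods were called in this exact order
((fflib_ISObjectUnitOfWork) inOrder.verify(mockUOW)).registerDirty(caseList);
((fflib_ISObjectUnitOfWork) inOrder.verify(mockUOW)).commitWork();
```

The linked blog post covers the full range of ordered verification options.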
Apex Mocks Counters (Counting method calls and adding descriptions)
Counters counters counters… they’re one of the simplest things to learn thankfully, and also very useful! Basically you might want to know how many times a method in a mock class was called by the class you are currently unit testing. It helps to verify the logic in your code is only calling your mocked classes in the way you intended. The syntax is pretty simple so lets get to learninnnnnnnn.
When we are verifying method calls (as discussed in one of the above sections) we do the following:
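A sketch of a basic verification, reusing the mockUOW and caseMatchingList names from the description example further down:

```apex
//Verify that registerDirty was called on our mocked unit of work with this exact list
((fflib_ISObjectUnitOfWork) mocks.verify(mockUOW)).registerDirty(caseMatchingList);
```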
What if we wanna check whether the code operated only once though? We can check pretty easy by passing a second parameter to the verify method like so:
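A sketch, again reusing the mockUOW and caseMatchingList names from the description example further down:

```apex
//The second parameter is the expected call count
((fflib_ISObjectUnitOfWork) mocks.verify(mockUOW, 1)).registerDirty(caseMatchingList);
```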
The above code will verify that our method was not just called, but only called once. Pretty kewl right? But wait! There’s more! Let’s check out a bunch of different counter scenarios below.
How to verify a method was called at least x times:
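Something along these lines, using the counter helpers on fflib_ApexMocks (same mockUOW and caseMatchingList assumptions as above):

```apex
//Verify the method was called at least 2 times
((fflib_ISObjectUnitOfWork) mocks.verify(mockUOW, mocks.atLeast(2))).registerDirty(caseMatchingList);

//Some of the other counters you can pass to verify:
//mocks.atMost(3)      - at most 3 times
//mocks.atLeastOnce()  - 1 or more times
//mocks.between(2, 4)  - between 2 and 4 times
//mocks.never()        - never called
```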
Wow wow wow, so many options for so many scenarios! However, I would be remiss if I didn’t also show you the description method you can chain on to give yourself useful verification failure statements. Let’s check that out right quick:
((fflib_ISObjectUnitOfWork)mocks.verify(mockUOW, mocks.times(2).description('Whoa champ, you didnt call this dirty method twice, try again broseph'))).registerDirty(caseMatchingList);
If your verification fails (and in turn your test class fails) you will now get that wonderful description to help you pinpoint the cause of your test failure a bit easier. That’s pretty awesome possum.
If we couldn’t produce fake records to pass to our mocked class methods, honestly this whole setup would only be kinda good, because we’d always be forced to do real DML transactions in our tests to create data. And if you didn’t know, the reason tests can take sooooooooooooo long is because of DML transactions. They take a crazy amount of time in comparison to everything else in your code (unless you’ve got quadruple inner for loops in your codebase somewhere… which you should address like right now, please stop reading this and fix it). So let’s figure out how to make fake data using the Apex Mocks library.
The first class on our list in the Apex Mocks library to check out is the fflib_IDGenerator class. Inside this class there is a single method, generate(Schema.SObjectType sobjectType), whose purpose is to take an SObjectType and return an appropriate Id for it. This is super useful because if you need to fake that a record already exists in the system, or that a record is parented to another record, you need an Id. You’ll likely use this method a ton.
Example method call to get a fake (but legal) Id:
Id mockCaseId = fflib_IDGenerator.generate(Case.SObjectType);
That will return to you a legal caseId! Pretty useful.
The next class on our list to cover here is the fflib_ApexMocksUtils class. This class has three accessible methods that allow you to make in-memory relationships between fake records and to set up formula field values on a fake record. Let’s take a look at some examples below.
//Example from: https://salesforce.stackexchange.com/questions/315832/mocking-related-objects-using-fflib
Opportunity opportunityMock1 = new Opportunity(Id = fflib_IDGenerator.generate(Opportunity.SObjectType));
//This creates a list of opportunities whose OpportunityLineItems child relationship
//field is populated with the line items below
List<Opportunity> opportunitiesWithProductsMock = (List<Opportunity>) fflib_ApexMocksUtils.makeRelationship(
    List<Opportunity>.class,
    new List<Opportunity>{ opportunityMock1 },
    OpportunityLineItem.OpportunityId,
    new List<List<OpportunityLineItem>>{
        new List<OpportunityLineItem>{
            new OpportunityLineItem(Id = fflib_IDGenerator.generate(OpportunityLineItem.SObjectType)),
            new OpportunityLineItem(Id = fflib_IDGenerator.generate(OpportunityLineItem.SObjectType))
        }
    }
);
//This is basically the same method as above, but you pass in Strings to determine the
//object and relationship instead of types. I would use the method above unless this
//one is 110% necessary.
List<Opportunity> opportunitiesWithProductsMock2 = (List<Opportunity>) fflib_ApexMocksUtils.makeRelationship(
    'Opportunity',
    'OpportunityLineItem',
    new List<Opportunity>{ opportunityMock1 },
    'OpportunityId',
    new List<List<OpportunityLineItem>>{
        new List<OpportunityLineItem>{
            new OpportunityLineItem(Id = fflib_IDGenerator.generate(OpportunityLineItem.SObjectType)),
            new OpportunityLineItem(Id = fflib_IDGenerator.generate(OpportunityLineItem.SObjectType))
        }
    }
);
//Code from: https://github.com/apex-enterprise-patterns/fflib-apex-mocks
Account acc = new Account();
Integer mockFormulaResult = 10;
//This will allow you to set read only fields on your objects, such as the formula field below. Pretty useful!
acc = (Account)fflib_ApexMocksUtils.setReadOnlyFields(
acc,
Account.class,
new Map<SObjectField, Object> {Account.Your_Formula_Field__c => mockFormulaResult}
);
Example Apex Mocks Classes
Task_Service_Impl_Test – Apex Mocks test class example that uses the Apex Common library.
Next Section
That’s it! You’re done! Thank god… I’m tired of writing this thing. You can go back to the home page here.
The Apex Common Library is an open source library originally created by Andy Fawcett when he was the CTO of FinancialForce, and currently maintained by many community members, most notably John Daniel. Aside from its origins and the fflib_ prefix in the class names, it is no longer linked to FinancialForce in any way.
The library was originally created because implementing the Separation of Concerns design principle is difficult no matter what tech stack you’re working in. For Salesforce, the Apex Common Library was built to simplify the process of implementing Separation of Concerns, as well as to assist in managing DML transactions, creating high quality unit tests (you need the Apex Mocks library to assist with this) and enforcing coding and security best practices. If you want an exceptionally clean, understandable and flexible code base, the Apex Common Library will greatly assist you in those endeavors.
Does The Apex Common Library Implement Separation of Concerns for me Automatically?
Unfortunately it’s not that simple. This library doesn’t just automatically do this for you (no library could), but what it does is give you the tools to easily implement this design principle in your respective Salesforce org or managed package. Though there are many more classes in the Apex Common Library, there are four major classes to familiarize yourself with in order to implement this, along with four object oriented programming concepts and three major design patterns. Additionally, it’s beneficial if you understand the difference between a unit test and an integration test. We’ll go over all of these things below.
The Four Major Classes
1) fflib_Application.cls – This Application class acts as a way to easily implement the factory pattern for building the different layers when running your respective applications within your org (or managed package). When I say “Application” for an org based implementation this could mean a lot of things, but think of it as a grouping of code that represents a specific section of your org. Maybe you have a service desk in your org; that service desk could be represented as an “Application”. This class and the factory pattern are also what make the Apex Mocks library work; without implementing it, Apex Mocks will not work.
2) fflib_SObjectDomain.cls – This houses the base class that all Domain classes you create will extend. The many methods within this class serve to make your life considerably easier when building your domain classes for each object that requires a trigger. You can check out my Apex Common Domain Layer Implementation Guide for more details.
3) fflib_SObjectSelector.cls – This houses the base class that all Selector classes you create will extend. The many methods within this class will serve to make your life a ton easier when implementing selector classes for the various objects in your org. You can check out my Apex Common Selector Layer Implementation Guide for more details.
1) Inheritance – When a class inherits (or extends) another class, the subclass gets access to all of its publicly accessible methods and variables.
2) Polymorphism – When a class uses overloaded methods or overrides an inherited class’s methods.
3) Encapsulation – Only publishing (making public) the methods and class variables that other classes need in order to use it.
4) Interfaces – An interface is a contract between it and a class that implements it, ensuring the class implements specific method signatures.
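As a tiny illustration of the interface concept, here’s a sketch using the Task_Creator_Interface that shows up later in this article (the case specific logic is left as a placeholder):

```apex
//The contract: any class that implements this must provide createTasks
public interface Task_Creator_Interface {
    void createTasks(Set<Id> recordIds);
}

//A class fulfilling the contract with its own object specific logic
public with sharing class Cases implements Task_Creator_Interface {
    public void createTasks(Set<Id> recordIds) {
        //case specific task creation logic here
    }
}
```

Because both Cases and, say, Opportunities can implement the same interface, code can call createTasks on either without caring which concrete class it has.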
The factory method pattern allows you to create objects (or instantiate classes) without having to specify the exact class that is being created. Say for instance you have a service class that can be called by multiple object types and those object types each have their own object specific implementation for creating tasks for them. Instead of writing a ton of if else’s in the service class to determine which class should be constructed, you could leverage the factory method pattern and massively reduce your code.
Why is it Useful?
It’s useful because, used appropriately, it can massively reduce the amount of code in your codebase and allow for a much more dynamic and flexible implementation. The amount of flexibility can be absolutely astounding. Let’s take a look at two different examples. One not using the factory pattern and another that does!
Creating Tasks for Different Objects (No Factory Pattern):
public with sharing class Task_Service_Impl
{
    //This method calls the task creators for each object type
    public void createTasks(Set<Id> recordIds, Schema.SObjectType objectType)
    {
        if(objectType == Account.getSObjectType()){
            //Accounts (and the other classes constructed below) is a domain class,
            //not the regular Account object.
            //This is further explained in the domain layer section of this wiki.
            //Basically you name your domain class the plural version of the object
            //the domain represents.
            new Accounts().createTasks(recordIds);
        }
        else if(objectType == Case.getSObjectType()){
            new Cases().createTasks(recordIds);
        }
        else if(objectType == Opportunity.getSObjectType()){
            new Opportunities().createTasks(recordIds);
        }
        else if(objectType == Taco__c.getSObjectType()){
            new Tacos().createTasks(recordIds);
        }
        else if(objectType == Chocolate__c.getSObjectType()){
            new Chocolates().createTasks(recordIds);
        }
        //etc etc for each object, this could go on for decades
    }
}
Creating Tasks for Different Objects (Factory Pattern):
//The task service that anywhere can call and it will operate as expected with super minimal logic
public with sharing class Task_Service_Impl implements Task_Service_Interface
{
    //This method calls the task creators for each object type
    public void createTasks(Set<Id> recordIds)
    {
        //Using our Application class we are able to instantiate new instances of
        //domain classes based on the recordIds we pass to the newInstance method.
        //We cover the fflib_Application class and how it uses the factory pattern
        //a ton more in the next section.
        fflib_ISObjectDomain objectDomain = Application.domain.newInstance(recordIds);
        if(objectDomain instanceof Task_Creator_Interface){
            Task_Creator_Interface taskCreator = (Task_Creator_Interface)objectDomain;
            taskCreator.createTasks(recordIds);
        }
    }
}
Right now you might be kinda shook… at least I know I was the first time I implemented it, lol. How on Earth is this possible?? How can so much code be reduced to so little? The first thing we do is instantiate a new domain class (domain classes are basically just kinda fancy trigger handlers, but more on that later) using our Application class (our factory class), simply by sending it record ids. The Application factory class generates the object specific domain class by determining the object type of the set of record ids using the Id.getSObjectType() method that Salesforce makes available in Apex. Then, by implementing the Task_Creator_Interface on each object’s domain class, I’m guaranteeing that if something is an instance of Task_Creator_Interface it will have a method called createTasks! Depending on the use case this can take hundreds of lines of code and reduce it to almost nothing. It also helps in the Separation of Concerns area by making our services much more abstract. It delegates logic to the respective services or domains instead of somewhere the logic probably doesn’t belong.
Where does it fit into Separation of Concerns?
Basically it reduces your need to declare concrete class/object types in your code in many places, and it allows you to create extremely flexible and abstract services (more on this in the implementing the service layer with apex common section). Again, take the example of task creation: maybe you have 15 controller classes (classes connected to a UI) in your org making tasks for 15 different objects, and each object has a different task implementation, but you want to move all that task creation logic into a single service class that anywhere can call for any object at any time. The factory method pattern is quite literally built for this scenario. In fact I have two examples below demonstrating it! One using a simple factory class to create tasks and one using the fflib_Application class to do the same thing.
The following code example in the repo is an example of how the factory pattern could work in a real world Salesforce implementation to allow for tasks to be created on multiple objects using a different implementation for each object.
Quality question… I mean honestly wtf is this thing? Lol, sorry, let’s figure it out together. The fflib_Application class exists for two primary purposes. The first is to give you an extremely abstract way of creating new instances of your unit of work, service layer, domain layer and selector layer classes in the Apex Common Library through the use of the factory pattern. The second is that implementing this Application class is imperative if you want to leverage the Apex Mocks unit testing library; it depends on this Application factory being implemented.
Most importantly though, if you understand how interfaces, inheritance and polymorphism work, implementing this class allows you to write extremely abstract Salesforce implementations, which we’ll discuss more in the sections below.
Why is this class used?
Ok, if we ignore the fact that this is required for us to use the Apex Mocks library, understanding the power behind this class requires us to take a step back and formulate a real world Salesforce use case for implementing it… hopefully the following one will be easy for everyone to understand.
Say for instance I have a decent sized Salesforce instance and our business has a use case to create tasks across multiple objects, and the logic for creating those tasks is unique to every single object. Maybe on the Account object we create three new tasks every single time we create an account, and on the Contact object we create two tasks every single time a record is created or updated in a particular way, and we ideally want to call this logic on the fly from anywhere in our system.
No matter what we should probably place the task creation logic in our domain layer because it’s relevant to each individual object, but pretend for a second that we have like 20 different objects we need this kind of functionality on. Maybe we need the executed logic in an abstract “task creator” button that can be placed on any lightning app builder page and maybe some overnight batch jobs need to execute the logic too.
Well… what do we do? Let’s just take the abstract “Task Creator” button we might want to place on any object in our system. We could call each individual domain layer class’s task creation logic in the code based on the object we were on (code example below), but that logic tree could get massive and it’s not super ideal.
Task Service example with object logic tree
public with sharing class Task_Service_Impl
{
    //This method calls the task creators for each object type
    public void createTasks(Set<Id> recordIds, Schema.SObjectType objectType)
    {
        if(objectType == Account.getSObjectType()){
            new Accounts().createTasks(recordIds);
        }
        else if(objectType == Case.getSObjectType()){
            new Cases().createTasks(recordIds);
        }
        else if(objectType == Opportunity.getSObjectType()){
            new Opportunities().createTasks(recordIds);
        }
        else if(objectType == Taco__c.getSObjectType()){
            new Tacos().createTasks(recordIds);
        }
        else if(objectType == Chocolate__c.getSObjectType()){
            new Chocolates().createTasks(recordIds);
        }
        //etc etc for each object, this could go on for decades
    }
}
Maybe… just maybe there’s an easier way. This is where the factory pattern and the fflib_Application class come in handy. Through the use of the factory pattern we can create an abstract Task Service that can (based on a set of records we pass to it) select the right business logic to execute in each domain layer dynamically.
//Creation of the Application factory class
public with sharing class Application
{
    public static final fflib_Application.ServiceFactory service =
        new fflib_Application.ServiceFactory(
            new Map<Type, Type>{
                Task_Service_Interface.class => Task_Service_Impl.class}
        );

    //Note: the DomainFactory constructor also takes the selector factory
    //(Application.selector), which is covered later in this article.
    public static final fflib_Application.DomainFactory domain =
        new fflib_Application.DomainFactory(
            Application.selector,
            new Map<SObjectType, Type>{
                Case.SObjectType => Cases.Constructor.class,
                Opportunity.SObjectType => Opportunities.Constructor.class,
                Account.SObjectType => Accounts.Constructor.class,
                Taco__c.SObjectType => Tacos.Constructor.class,
                Chocolate__c.SObjectType => Chocolates.Constructor.class}
        );
}
//The task service that anywhere can call and it will operate as expected with super minimal logic
public with sharing class Task_Service_Impl implements Task_Service_Interface
{
    //This method calls the task creators for each object type
    public void createTasks(Set<Id> recordIds, Schema.SObjectType objectType)
    {
        fflib_ISObjectDomain objectDomain = Application.domain.newInstance(recordIds);
        if(objectDomain instanceof Task_Creator_Interface){
            Task_Creator_Interface taskCreator = (Task_Creator_Interface)objectDomain;
            taskCreator.createTasks(recordIds);
        }
    }
}
You might be lookin at the two code examples right now like wuttttttttt how thooooo?? And I just wanna say, I fully understand that. The first time I saw this implemented I thought the same thing, but it’s a pretty magical thing. Thanks to the newInstance() methods on the fflib_Application class and the Task_Creator_Interface we’ve implemented on the domain classes, you can dynamically generate the correct domain class when the code runs and call its createTasks method. Pretty wyld right? Also if you’re thinkin, “Yea that’s kinda nifty Matt, but you had to create this Application class and that’s a bunch of extra code.”, you need to step back even farther. This Application factory can be leveraged ANYWHERE IN YOUR ENTIRE CODEBASE! Not just locally in your service class. If you need to implement something similar to automatically generate Opportunities or Accounts or something from tons of different objects, you can leverage this exact same Application class there. In the long run, this ends up being wayyyyyyyyy less code.
If you want a ton more in depth explanation on this, please watch the tutorial video. We code a live example together so I can explain this concept. It’s certainly not easy to grasp at first glance.
fflib_Application inner classes and methods cheat sheet
Inside the fflib_Application class there are four inner classes that represent factories for your unit of work, service layer, domain layer and selector layer.
//The constructor for this class requires you to pass a list of SObject types in dependency order. So in this instance Accounts would always be inserted before Contacts, Contacts before Cases, etc.
public static final fflib_Application.UnitOfWorkFactory UOW =
    new fflib_Application.UnitOfWorkFactory(
        new List<SObjectType>{
            Account.SObjectType,
            Contact.SObjectType,
            Case.SObjectType,
            Task.SObjectType}
    );
After creating this unit of work variable above ^ in your Application class example here there are four important new instance methods you can leverage to generate a new unit of work:
1) newInstance() – This creates a new instance of the unit of work using the SObjectType list passed in the constructor.
newInstance() Example Method Call
public with sharing class Application
{
    public static final fflib_Application.UnitOfWorkFactory UOW =
        new fflib_Application.UnitOfWorkFactory(
            new List<SObjectType>{
                Account.SObjectType,
                Contact.SObjectType,
                Case.SObjectType,
                Task.SObjectType}
        );
}

public with sharing class SomeClass{
    public void someClassMethod(){
        fflib_ISObjectUnitOfWork unitOfWork = Application.UOW.newInstance();
    }
}
2) newInstance(fflib_SObjectUnitOfWork.IDML dml) – This creates a new instance of the unit of work using the SObjectType list passed in the constructor, along with a custom IDML implementation that lets you control exactly how the DML is performed.
newInstance(fflib_SObjectUnitOfWork.IDML dml) Example Method Call
public with sharing class Application
{
    public static final fflib_Application.UnitOfWorkFactory UOW =
        new fflib_Application.UnitOfWorkFactory(
            new List<SObjectType>{
                Account.SObjectType,
                Contact.SObjectType,
                Case.SObjectType,
                Task.SObjectType}
        );
}

//Custom IDML implementation. Note the methods must be public to satisfy the interface.
public with sharing class IDML_Example implements fflib_SObjectUnitOfWork.IDML
{
    public void dmlInsert(List<SObject> objList){
        //custom insert logic here
    }
    public void dmlUpdate(List<SObject> objList){
        //custom update logic here
    }
    public void dmlDelete(List<SObject> objList){
        //custom delete logic here
    }
    public void eventPublish(List<SObject> objList){
        //custom event publishing logic here
    }
    public void emptyRecycleBin(List<SObject> objList){
        //custom empty recycle bin logic here
    }
}

public with sharing class SomeClass{
    public void someClassMethod(){
        fflib_ISObjectUnitOfWork unitOfWork = Application.UOW.newInstance(new IDML_Example());
    }
}
3) newInstance(List <SObjectType> objectTypes) – This creates a new instance of the unit of work and overwrites the SObject type list passed in the constructor so you can have a custom order if you need it.
newInstance(List <SObjectType> objectTypes) Example Method Call
public with sharing class Application
{
    public static final fflib_Application.UnitOfWorkFactory UOW =
        new fflib_Application.UnitOfWorkFactory(
            new List<SObjectType>{
                Account.SObjectType,
                Contact.SObjectType,
                Case.SObjectType,
                Task.SObjectType}
        );
}

public with sharing class SomeClass{
    public void someClassMethod(){
        //Overriding the default ordering with a custom one for this unit of work
        fflib_ISObjectUnitOfWork unitOfWork = Application.UOW.newInstance(new List<SObjectType>{
            Case.SObjectType,
            Account.SObjectType,
            Task.SObjectType,
            Contact.SObjectType
        });
    }
}
4) newInstance(List<SObjectType> objectTypes, fflib_SObjectUnitOfWork.IDML dml) – This creates a new instance of the unit of work with both a custom SObjectType ordering and a custom IDML implementation.
newInstance(List<SObjectType> objectTypes, fflib_SObjectUnitOfWork.IDML dml) Example Method Call
public with sharing class Application
{
    public static final fflib_Application.UnitOfWorkFactory UOW =
        new fflib_Application.UnitOfWorkFactory(
            new List<SObjectType>{
                Account.SObjectType,
                Contact.SObjectType,
                Case.SObjectType,
                Task.SObjectType}
        );
}

//Custom IDML implementation. Note the methods must be public to satisfy the interface.
public with sharing class IDML_Example implements fflib_SObjectUnitOfWork.IDML
{
    public void dmlInsert(List<SObject> objList){
        //custom insert logic here
    }
    public void dmlUpdate(List<SObject> objList){
        //custom update logic here
    }
    public void dmlDelete(List<SObject> objList){
        //custom delete logic here
    }
    public void eventPublish(List<SObject> objList){
        //custom event publishing logic here
    }
    public void emptyRecycleBin(List<SObject> objList){
        //custom empty recycle bin logic here
    }
}

public with sharing class SomeClass{
    public void someClassMethod(){
        fflib_ISObjectUnitOfWork unitOfWork = Application.UOW.newInstance(new List<SObjectType>{
            Case.SObjectType,
            Account.SObjectType,
            Task.SObjectType,
            Contact.SObjectType
        }, new IDML_Example());
    }
}
//This allows us to create a factory for instantiating service classes. You send it the interface for your service class
//and it will return the correct service layer class.
//Example initialization: Object objectService = Application.service.newInstance(Task_Service_Interface.class);
public static final fflib_Application.ServiceFactory service =
    new fflib_Application.ServiceFactory(new Map<Type, Type>{
        SObject_SharingService_Interface.class => SObject_SharingService_Impl.class
    });
After creating this service variable above ^ in your Application class example here there is one important new instance method you can leverage to generate a new service class instance:
1) newInstance(Type serviceInterfaceType) – This method sends back an instance of your service implementation class based on the interface you send in to it.
newInstance(Type serviceInterfaceType) Example method call:
//This is using the service variable above that we would've created in our Application class
Application.service.newInstance(Task_Service_Interface.class);
//This allows us to create a factory for instantiating selector classes. You send it an object type and it sends
//you back the corresponding selector layer class.
//Example initialization: fflib_ISObjectSelector objectSelector = Application.selector.newInstance(objectType);
public static final fflib_Application.SelectorFactory selector =
    new fflib_Application.SelectorFactory(
        new Map<SObjectType, Type>{
            Case.SObjectType => Case_Selector.class,
            Contact.SObjectType => Contact_Selector.class,
            Task.SObjectType => Task_Selector.class}
    );
After creating this selector variable above ^ in your Application class example here there are three important methods you can leverage to generate a new selector class instance:
1) newInstance(SObjectType sObjectType) – This method will generate a new instance of the selector based on the object type passed to it. So for instance, if you have an Opportunity_Selector class and pass Opportunity.SObjectType to the newInstance method, you will get back your Opportunity_Selector class (provided you have configured it this way in the map passed to the factory in your Application class).
newInstance(SObjectType sObjectType) Example method call:
//This is using the selector variable above that we would've created in our Application class
Application.selector.newInstance(Case.SObjectType);
2) selectById(Set<Id> recordIds) – This method, based on the ids you pass, will automatically call the selector layer class registered for the set of ids’ object type. It will then call the selectSObjectsById method that all selector classes must implement and return a list of SObjects to you.
selectById(Set<Id> recordIds) Example method call:
//This is using the selector variable above that we would've created in our Application class
Application.selector.selectById(accountIdSet);
3) selectByRelationship(List<sObject> relatedRecords, SObjectField relationshipField) – This method, based on the relatedRecords and the relationship field passed to it, will generate a selector layer class for the object type in the relationship field. So say you were querying the Contact object and you wanted an Account selector class: you could call this method, pass it the list of contacts you queried and the AccountId field, and have an Account selector returned to you (provided that selector was configured in the Application class shown above in this wiki article).
selectByRelationship(List<sObject> relatedRecords, SObjectField relationshipField) Example method call:
//This is using the selector variable above that we would've created in our Application class
Application.selector.selectByRelationship(contactList, Contact.AccountId);
//This allows you to create a factory for instantiating domain classes. You can send it a set of record ids and
//you'll get the corresponding domain layer.
//Example initialization: fflib_ISObjectDomain objectDomain = Application.domain.newInstance(recordIds);
public static final fflib_Application.DomainFactory domain =
new fflib_Application.DomainFactory(
Application.selector,
new Map<SObjectType, Type>{Case.SObjectType => Cases.Constructor.class,
Contact.SObjectType => Contacts.Constructor.class}
);
After creating the domain variable above in your Application class, there are some important methods you can leverage to generate a new domain class instance:
1) newInstance(Set <Id> recordIds) – This method creates a new instance of your domain class based off the object type in the set of ids you pass it.
newInstance(Set<Id> recordIds) Example method call:
Application.domain.newInstance(accountIdSet);
2) newInstance(List<sObject> records) – This method creates a new instance of your domain class based off the object type in the list of records you pass it.
newInstance(List<sObject> records) Example method call:
Application.domain.newInstance(caseList);
In every factory class inside the fflib_Application class there is a setMock method. These methods are used to pass in mock/fake versions of your classes for unit testing purposes. Make sure to leverage this method if you are planning to do unit testing. Leveraging this method eliminates the need to use dependency injection in your classes to allow for mocking. There are examples of how to leverage this method in the Implementing Mock Unit Testing with Apex Mocks section of this wiki.
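As a rough sketch of how setMock fits together with Apex Mocks (Cases_Selector here is a hypothetical selector class; the fflib_ApexMocks calls are the library's standard mocking pattern):

```apex
@IsTest
private static void serviceMethod_UnitTest(){
    fflib_ApexMocks mocks = new fflib_ApexMocks();
    //Create a mock version of our hypothetical Cases_Selector class
    Cases_Selector mockSelector = (Cases_Selector) mocks.mock(Cases_Selector.class);
    //Stub sObjectType() so the factory knows which object type this mock represents
    mocks.startStubbing();
    mocks.when(mockSelector.sObjectType()).thenReturn(Case.SObjectType);
    mocks.stopStubbing();
    //Register the mock; any code that calls Application.selector.newInstance(Case.SObjectType)
    //during this test will now receive mockSelector instead of the real selector
    Application.selector.setMock(mockSelector);
}
```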
A Unit of Work, “Maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems”. – Martin Fowler
The goal of the unit of work pattern is to simplify DML in your code and only commit changes to the database/objects when it’s truly time to commit. Considering the many limits around DML in Salesforce, it’s important to employ this pattern in your org in some way. It’s also important to note that this, “maintains a list of objects affected by a business transaction”, which indicates that the UOW pattern should be prevalent in your service layer (The service layer houses business logic).
The UOW pattern also ensures we don’t have data inconsistencies in our Salesforce instance. It does this by only committing work when all the DML operations complete successfully. It rolls back our transactions when any DML fails in our unit of work.
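As a minimal sketch of the pattern in action (assuming an Application class with a uow factory, as covered elsewhere in this wiki):

```apex
fflib_ISObjectUnitOfWork uow = Application.uow.newInstance();

Account acct = new Account(Name = 'Test Account');
//Queue the account insert; no DML happens yet
uow.registerNew(acct);

Opportunity opp = new Opportunity(Name = 'Test Opp', StageName = 'Prospecting',
    CloseDate = Date.today().addDays(30));
//Queue the opportunity and relate it to the account; the UOW resolves the
//AccountId lookup once the account receives an Id during commit
uow.registerNew(opp, Opportunity.AccountId, acct);

//One commit: both inserts happen here, wrapped in a savepoint that
//rolls back automatically if any DML fails
uow.commitWork();
```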
Benefits of Using the Unit of Work Pattern in Salesforce
There are several, but here are the biggest of them all: massive code reduction, consistency in your DML transactions, doing the minimum DML statements feasible (bulkification), and DML mocking in unit tests. Let’s figure out how we reduce the code and make it more consistent first.
The Code Reduction and Consistency
Think about all the places in your codebase where you insert records, error handle the inserting of those records, and manage the transactional state of your records (savepoints). Maybe if your org is new there’s not a ton happening yet, but as it grows the amount of code dealing with that can become enormous and, even worse, inconsistent. I’ve worked in 12-year-old orgs that had 8,000+ lines of code dedicated just to inserting records throughout the system, and with every dev who wrote that code came a new variety of transaction management, different error handling (or none at all), etc.
Code Bulkification
The unit of work pattern also helps a great deal with code bulkification. It encourages you to finish creating and modifying 100% of your records in your transaction prior to actually committing them (doing the DML transactions) to the database (objects). It makes sure that you are doing the absolute minimum transactions necessary to be successful. For instance, maybe for some reason your code updates cases in one method, and when it’s done it calls another method that updates those same cases… why do that? You could register all those updates and update all those cases at once with one DML statement. Whether you realize it at the time or not, every DML statement counts… use them sparingly.
DML Mocking for Unit Tests
If you’re not sure what mocking and unit tests are, then definitely check out my section on that in the wiki here. Basically, in an ideal scenario you would like to do unit testing, but unit testing depends on you having the ability to mock classes for your tests (basically creating fake versions of your classes that you have complete control over in your tests). Creating this layer that handles your DML transactions allows you to mock that layer in your classes when doing unit tests… If this is confusing, no worries, we’ll discuss it a bunch more in the last three sections of this wiki.
It is a foundation built to allow you to leverage the unit of work design pattern from within Salesforce. Basically, this class is designed to hold your database operations (insert, update, etc.) in memory until you are ready to do all of your database transactions in one big transaction. It also handles savepoint rollbacks to ensure data consistency. For instance, if you are inserting Opportunities with Quotes in the same database (DML) transaction, chances are you don’t wanna insert those Opportunities if your Quotes fail to insert. The unit of work class is set up to automatically handle that transaction management and roll back if anything fails.
It also follows bulkification best practices to make your life even easier when dealing with DML transactions.
Why is this class used?
This class is utilized so that you can have super fine control over your database transactions and so that you only do DML transactions when every single record is prepped and ready to be inserted, updated, etc.
Additionally, there are three reasons it is important to leverage this class (or a class like it): 1) To allow for DML mocking in your test classes. 2) To massively reduce duplicate code for DML transactions in your org. 3) To make DML transaction management consistent.
Think about those last two for a second… how many lines of code in your org insert, update, upsert (etc.) records? Then think about how much code also error handles those transactions and (if you’re doing things right) how much code goes into savepoint rollbacks. That all adds up over time to a ton of code. This class houses it all in one centralized Apex class. You’ll never have to re-write all that logic again.
How to Register a Callback method for an Apex Commons UOW
The following code example shows you how to set up a callback method for your units of work using the fflib_SObjectUnitOfWork.IDoWork interface, should you need one.
public inherited sharing class HelpDeskAppPostCommitLogic implements fflib_SObjectUnitOfWork.IDoWork{
    List<Task> taskList;
    public HelpDeskAppPostCommitLogic(List<Task> taskList){
        this.taskList = taskList;
    }
    public void doWork(){
        //write callback code here
    }
}
The code below shows you how to actually make sure your unit of work calls your callback method.
fflib_ISObjectUnitOfWork uow = Helpdesk_Application.helpDeskUOW.newInstance();
//code to create some tasks
uow.registerNew(newTasks);
uow.registerWork(new HelpDeskAppPostCommitLogic(newTasks));
uow.commitWork();
Apex Commons Unit of Work Limitations
1) Records within the same object that have lookups to each other are currently not supported. For example, if the Account object has a Lookup to itself, that relationship cannot be registered.
2) You cannot do partial-success (allOrNone = false) database transactions, like the one below, without creating a custom IDML implementation.
Database.insert(acctList, false);
3) To send emails with the Apex Commons UOW you must utilize the special registerEmail method.
4) It does not manage FLS and CRUD unless you implement a custom class that implements the IDML interface and does that for you.
How and When to use the fflib_SObjectUnitOfWork IDML Interface
If your unit of work needs a custom implementation for inserting, updating, deleting, etc. that is not supported by the SimpleDML inner class, then you are gonna want to create a new class that implements the fflib_SObjectUnitOfWork.IDML interface. After you create that class, if you are using the Application factory you would instantiate your unit of work like so: Application.uow.newInstance(new customIDMLClass()); otherwise you would initialize it like so: fflib_SObjectUnitOfWork uow = new fflib_SObjectUnitOfWork(new List<SObjectType>{Case.SObjectType}, new customIDMLClass());. A CUSTOM IDML CLASS IS SUPER IMPORTANT IF YOU WANT TO MANAGE CRUD AND FLS! The fflib_SObjectUnitOfWork class does not do that for you! So let’s check out an example of how to implement a custom IDML class together below.
Example of an IDML Class
//Implementing this interface allows you to overcome the limitations of the regular unit of work class.
public with sharing class IDML_Example implements fflib_SObjectUnitOfWork.IDML
{
    public void dmlInsert(List<SObject> objList){
        //custom insert logic here
    }
    public void dmlUpdate(List<SObject> objList){
        //custom update logic here
    }
    public void dmlDelete(List<SObject> objList){
        //custom delete logic here
    }
    public void eventPublish(List<SObject> objList){
        //custom event publishing logic here
    }
    public void emptyRecycleBin(List<SObject> objList){
        //custom empty recycle bin logic here
    }
}
fflib_SObjectUnitOfWork class method cheat sheet
This does not encompass all methods in the fflib_SObjectUnitOfWork class, but it does cover the most commonly used ones. There are also methods in this class to publish platform events should you need them, but they aren’t covered below.
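For quick reference, the most commonly used methods look like this in practice (a sketch, not an exhaustive list; exact signatures may vary slightly between fflib versions, and the record variables here are illustrative):

```apex
fflib_ISObjectUnitOfWork uow = Application.uow.newInstance();

uow.registerNew(newCase);                                //queue an insert
uow.registerDirty(existingCase);                         //queue an update
uow.registerUpsert(maybeNewCase);                        //queue an upsert
uow.registerDeleted(oldCase);                            //queue a delete
uow.registerRelationship(newCase, Case.AccountId, acct); //relate two queued records
uow.registerWork(new HelpDeskAppPostCommitLogic(tasks)); //queue post-commit logic (IDoWork)
uow.registerEmail(singleEmailMessage);                   //queue an email to send on commit
uow.commitWork();                                        //perform all queued DML in one transaction
```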
The Service Layer, “Defines an application’s boundaries with a layer of services that establishes a set of available operations and coordinates the application’s response in each operation”. – Martin Fowler
This essentially just means that the service layer should house your business logic. It should be a centralized place that holds code that represents business logic for each object (database table) or the service layer logic for a custom built app in your org (more common when building managed packages).
Difference between the Service Layer and Domain Layer – People seem to often confuse this layer with the Domain layer. The Domain layer is only for object specific default operations (triggers, validations, updates that should always execute on a database transaction, etc). The Service layer is for business logic for major modules/applications in your org. Sometimes that module is represented by an object, sometimes it is represented by a grouping of objects. Domain layer logic is specific to each individual object whereas services often are not.
Service Layer Naming Conventions
Class Names – Your service classes should be named after the area of the application your services represent. Typically, service classes are created for important objects or applications within your org.
Service Class Name Examples (Note that I prefer underscores in class names, this is just personal preference):
Account_Service
DocumentGenerationApp_Service
Method Names – The public method names should be the names of the business operations they represent. The method names should reflect what the end users of your system would refer to the business operation as. Service layer methods should also ideally always be static.
Method Parameter Types and Naming – The method parameters in public methods for the service layer should typically only accept collections (Map, Set, List) as the majority of service layer methods should be bulkified (there are some scenarios however that warrant non-collection types). The parameters should be named something that reflects the data they represent.
Service Class Method Names and Parameter Examples:
public static void calculateOpportunityProfits(List<Account> accountsToCalculate)
public static void generateWordDocument(Map<String, SObject> sObjectByName)
Service Layer Security
Service Layer Security Enforcement – Service layers hold business logic, so by default they should at minimum use inherited sharing when declaring the classes; however, I would suggest always using with sharing and allowing developers to elevate the code to run without sharing when necessary by using a private inner class.
Example Security for a Service Layer Class:
public with sharing class Account_Service{
    public static void calculateOpportunityProfits(List<Account> accountsToCalculate){
        //code here
        new Account_Service_WithoutSharing().calculateOpportunityProfits_WithoutSharing(accountsToCalculate);
    }
    private without sharing class Account_Service_WithoutSharing{
        public void calculateOpportunityProfits_WithoutSharing(List<Account> accountsToCalculate){
            //code here
        }
    }
}
Service Layer Code Best Practices
Keeping the code as flexible as possible
You should make sure that the code in the service layer does not expect the data passed to it to be in any particular format. For instance, if the service layer code is expecting a List of Accounts that has a certain set of fields filled out, your service method has just become very fragile. What if the service needs an additional field on that list of accounts to be filled out in the future to do its job? Then you have to refactor all the places building lists of data to send to that service layer method.
Instead you could pass in a set of Account Ids, have the service method query for all the fields it actually requires itself, and then return the appropriate data. This will make your service layer methods much more flexible.
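To illustrate the difference between the two approaches (the method and field names here are illustrative only):

```apex
//Fragile: assumes the caller queried the right fields on every Account
public static void calculateOpportunityProfits(List<Account> accountsToCalculate){
    for(Account acct : accountsToCalculate){
        //Throws a runtime exception if the caller forgot to query AnnualRevenue
        Decimal revenue = acct.AnnualRevenue;
    }
}

//Flexible: the service queries exactly the fields it needs itself
public static void calculateOpportunityProfits(Set<Id> accountIdsToCalculate){
    List<Account> accountsToCalculate = [SELECT Id, AnnualRevenue FROM Account
                                         WHERE Id IN :accountIdsToCalculate];
    //calculation logic here
}
```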
Transaction Management
Your service layer methods should handle transaction management (either with the unit of work pattern or otherwise) by leveraging Database.setSavepoint() and using try/catch blocks to roll back when execution fails.
Transaction management example
public static void calculateOpportunityProfits(Set<Id> accountIdsToCalculate){
    List<Account> accountsToCalculate = [SELECT Id FROM Account WHERE Id IN :accountIdsToCalculate];
    System.Savepoint savePoint = Database.setSavepoint();
    try{
        //calculation logic would go here
        Database.update(accountsToCalculate);
    }
    catch(Exception e){
        Database.rollback(savePoint);
        throw e;
    }
}
Compound Services
Sometimes code needs to call more than one method in the service layer. In this case, instead of calling both service layer methods from your calling code, you would ideally want to create a compound service method in your service layer.
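For example, calling code like this is what we want to avoid (a hypothetical controller; recalculateAccountTotals is an illustrative method name, not one built earlier):

```apex
public with sharing class Opportunity_Controller
{
    @AuraEnabled
    public static void recalculateAccounts(List<Id> accountIds){
        Set<Id> accountIdSet = new Set<Id>(accountIds);
        //Two separate service calls: each manages its own transaction,
        //so the first can succeed while the second fails.
        Account_Service.calculateOpportunityProfits(accountIdSet);
        Account_Service.recalculateAccountTotals(accountIdSet);
    }
}
```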
The reason calling the two services separately is detrimental is that you get one of two side effects: either transaction management is handled separately by each method, and one could fail while the other completes successfully (despite the fact we don’t actually want that to happen); or you handle transaction management in the class calling the service layer, which isn’t ideal either.
Instead we should create a new method in the service layer that combines those methods and handles the transaction management in a cleaner manner.
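A compound service method might look like this sketch (the method names are hypothetical; the point is the single shared savepoint):

```apex
public static void recalculateAccountFinancials(Set<Id> accountIdsToCalculate){
    System.Savepoint savePoint = Database.setSavepoint();
    try{
        //Both operations now succeed or fail together as one transaction
        calculateOpportunityProfits(accountIdsToCalculate);
        recalculateAccountTotals(accountIdsToCalculate);
    }
    catch(Exception e){
        Database.rollback(savePoint);
        throw e;
    }
}
```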
To find out how to implement the Service Layer using the Apex Common Library, continue reading here: Implementing the Service Layer with the Apex Common Library. If you’re not interested in utilizing the Apex Common Library, no worries; there are really no frameworks to implement a Service Layer (to my knowledge), because this is literally just a business logic layer and every single org’s service layer will be different. The only thing Apex Common assists with here is abstracting the service layer to assist with unit test mocking and to make your service class instantiations more dynamic.
Libraries That Could Be Used for the Service Layer
None to my knowledge although the Apex Common Library provides a good foundation for abstracting your service layers to assist with mocking and more dynamic class instantiations.
Service Layer Examples
Apex Common Example (Suggested)
All three of the below classes are tied together. We’ll go over how this works in the next section.
There is NO FRAMEWORK that can be made for service layer classes. This is a business logic layer and it will differ everywhere. No two businesses are identical. That being said, if you would like to leverage all of the other benefits of the Apex Common Library (primarily Apex Mocks) and you would like your service classes to be able to leverage the fflib_Application class to allow for dynamic runtime logic generation, you’ll need to structure your classes as outlined below. If you don’t want to leverage these things, then don’t worry about doing what is listed below… but trust me, in the long run it will likely be worth it as your org grows in size.
The Service Interface
For every service layer class you create, you will create an interface (or potentially a virtual class you can extend) that your service layer implementation class will implement (more on that below). This interface will have every method in your class represented in it. An example of a service interface is below. Some people like to prefix their interfaces with the letter I (example: ICaseService); however, I prefer to postfix with _I or _Interface as it’s a bit clearer in my opinion.
The methods in this interface should represent all of the public methods you plan to create for this service class. Private methods should not be represented here.
public interface Task_Service_Interface
{
    void createTasks(Set<Id> recordIds, Schema.SObjectType objectType);
}
The Service Layer Class
This class is where things get a little confusing in my opinion, but here’s the gist of it. This is the class you will actually call in your Apex controllers (or occasionally domain classes) to execute the code… however, there are no real implementation details in it (those exist in the implementation class outlined below). The reason this class sits in as a kind of middleman is that we want our controller classes, batch classes, domain classes, etc. to never need to alter the class they call to get the work done, no matter what business logic is actually called at run time. In the Service Factory section below we’ll see how that becomes a huge factor. Below is an example of the service layer class setup.
//This class is what every calling class will actually call to. For more information on the
//Application class, check out the fflib_Application class part of this wiki.
public with sharing class Task_Service
{
    //This literally just calls the Task_Service_Impl class's createTasks method
    public static void createTasks(Set<Id> recordIds, Schema.SObjectType objectType){
        service().createTasks(recordIds, objectType);
    }

    //This gets an instance of the Task_Service_Impl class from our Application class.
    //This method exists for ease of use in the other methods in this class.
    private static Task_Service_Interface service(){
        return (Task_Service_Interface) Application.service.newInstance(Task_Service_Interface.class);
    }
}
The Service Implementation Class
This is the concrete business logic implementation. This is effectively the code that isn’t super abstract, but rather the custom-built business logic specific to the business (or business unit) that needs it executed. Basically, this is where your actual business logic should reside. Now, again, you may be asking, but Matt… why not just create a new instance of this class and use it? Why create some silly interface and some middleman class to call this class? This isn’t gonna be superrrrrrr simple to wrap your head around, but bear with me. In the next section we tie all these classes together and paint the bigger picture. An example of a service implementation class is below.
/**
 * @description This is the true implementation of your business logic for your service layer.
 * These impl classes are where all the magic happens. In this case this is a service class
 * that executes the business logic for abstract Task creation on any theoretical object.
 */
public with sharing class Task_Service_Impl implements Task_Service_Interface
{
    //This method creates tasks and MUST BE IMPLEMENTED since we are implementing
    //the Task_Service_Interface
    public void createTasks(Set<Id> recordIds, Schema.SObjectType objectType)
    {
        //Getting a new instance of a domain class based purely on the ids of our records.
        //If these were case ids it would return a Case object domain class, if they were
        //contacts it would return a Contact object domain class.
        fflib_ISObjectDomain objectDomain = Application.domain.newInstance(recordIds);

        //Getting a new instance of our selector class based purely on the object type
        //passed. If we passed in a Case object type we would get a case selector, a
        //Contact object type a contact selector, etc.
        fflib_ISObjectSelector objectSelector = Application.selector.newInstance(objectType);

        //We're creating a new unit of work instance from our Application class.
        fflib_ISObjectUnitOfWork unitOfWork = Application.UOW.newInstance();

        //List to hold our records that need tasks created for them
        List<SObject> objectsThatNeedTasks = new List<SObject>();

        //If our selector class is an instance of Task_Selector_Interface (if it implements
        //the Task_Selector_Interface interface), call the selectRecordsForTasks() method
        //in the class. Otherwise just call the selectSObjectsById method.
        if(objectSelector instanceof Task_Selector_Interface){
            Task_Selector_Interface taskFieldSelector = (Task_Selector_Interface) objectSelector;
            objectsThatNeedTasks = taskFieldSelector.selectRecordsForTasks();
        }
        else{
            objectsThatNeedTasks = objectSelector.selectSObjectsById(recordIds);
        }

        //If our domain class is an instance of the Task_Creator_Interface (implements
        //the Task_Creator_Interface), call the createTasks method.
        if(objectDomain instanceof Task_Creator_Interface){
            Task_Creator_Interface taskCreator = (Task_Creator_Interface) objectDomain;
            taskCreator.createTasks(objectsThatNeedTasks, unitOfWork);
        }

        //Try committing the records we've created and/or updated in our unit of work
        //(we're basically doing all our DML at once here), else throw an exception.
        try{
            unitOfWork.commitWork();
        }
        catch(Exception e){
            throw e;
        }
    }
}
The fflib_Application.ServiceFactory class
The fflib_Application.ServiceFactory class… what is it and how does it fit in here? Well, if you read through all of Part 4: The fflib_Application Class then you hopefully have some solid background on what it’s used for and why, but it’s a little trickier to conceptualize for the service class, so let’s go over it a bit again. Basically, it leverages the Factory Pattern to dynamically generate the correct code implementations at run time (when your code is actually running).
This is awesome for tons of stuff, but it’s especially awesome for the service layer. Why? You’ll notice that as your Salesforce instance grows, so does the number of interested parties. All of a sudden you’ve gone from one or two business units to 25 different business units, and what happens when those businesses need the same type of functionality with differing logic? You could make tons of if/else statements determining what the user type is and then call different methods based on that user’s type… but maybe there’s an easier way. If you are an ISV (a managed package provider), what I’m about to show you is likely 1000 times more important for you. If your product grows and people start adopting it, you absolutely need a way to allow flexibility in your application’s business logic, maybe even allowing your customers to write their own logic and having a way for your code to execute it.
Let’s check out how allllllllllll these pieces come together below.
Tying all the classes together
Alright, let’s tie everything together piece by piece. Pretend we’ve got a custom metadata type that maps our service interfaces to a service class implementation and a custom user permission (or if you don’t wanna pretend, you can check it out here). Let’s first start by creating our new class that extends the fflib_Application.ServiceFactory class and overrides its newInstance method.
/*
@description: This class is an override for the prebuilt fflib_Application.ServiceFactory
that allows us to dynamically call service classes based on the running user's custom permissions.
*/
public with sharing class ServiceFactory extends fflib_Application.ServiceFactory
{
    Map<String, Service_By_User_Type__mdt> servicesByUserPermAndInterface =
        new Map<String, Service_By_User_Type__mdt>();

    public ServiceFactory(Map<Type, Type> serviceInterfaceByServiceImpl){
        super(serviceInterfaceByServiceImpl);
        this.servicesByUserPermAndInterface = getServicesByUserPermAndInterface();
    }

    //Overriding the fflib_Application.ServiceFactory newInstance method to allow us to
    //initialize a new service implementation type based on the running user's custom
    //permissions and the interface name passed in.
    public override Object newInstance(Type serviceInterfaceType){
        for(Service_By_User_Type__mdt serviceByUser : servicesByUserPermAndInterface.values()){
            if(servicesByUserPermAndInterface.containsKey(serviceByUser.User_Permission__c +
                serviceInterfaceType.getName()))
            {
                Service_By_User_Type__mdt overrideClass =
                    servicesByUserPermAndInterface.get(serviceByUser.User_Permission__c +
                    serviceInterfaceType.getName());
                return Type.forName(overrideClass.Service_Implementation_Class__c).newInstance();
            }
        }
        return super.newInstance(serviceInterfaceType);
    }

    //Creating our map of overrides keyed by user custom permission plus interface name
    private Map<String, Service_By_User_Type__mdt> getServicesByUserPermAndInterface(){
        Map<String, Service_By_User_Type__mdt> servicesByUserType =
            new Map<String, Service_By_User_Type__mdt>();
        for(Service_By_User_Type__mdt serviceByUser : Service_By_User_Type__mdt.getAll().values()){
            //Checking to see if the running user has any of the permissions for our
            //overrides; if so we put the overrides in the map
            if(FeatureManagement.checkPermission(serviceByUser.User_Permission__c)){
                servicesByUserType.put(serviceByUser.User_Permission__c +
                    serviceByUser.Service_Interface__c, serviceByUser);
            }
        }
        return servicesByUserType;
    }
}
Cool kewl cool, now that we have our custom ServiceFactory built to manage our overrides based on the running user’s custom permissions, we can leverage it in the Application factory class we’ve hopefully built by now, like so:
public with sharing class Application
{
    //Domain, Selector and UOW factories have been omitted for brevity, but should be
    //added to this class.
    //This allows us to create a factory for instantiating service classes. You send it
    //the interface for your service class and it will return the correct service layer class.
    //Example initialization:
    //Object objectService = Application.service.newInstance(Task_Service_Interface.class);
    public static final fflib_Application.ServiceFactory service =
        new ServiceFactory(
            new Map<Type, Type>{Task_Service_Interface.class => Task_Service_Impl.class});
}
Ok, we’ve done the hardest parts. Next we need to pretend that we are using the service class interface, service implementation class, and service class that we already built earlier (just above you; scroll up to those sections and review them if you forgot), because we’re about to see how a controller would call this task service we’ve built.
public with sharing class Abstract_Task_Creator_Controller
{
    @AuraEnabled
    public static void createTasks(Id recordId){
        Set<Id> recordIds = new Set<Id>{recordId};
        Schema.SObjectType objectType = recordId.getSobjectType();
        try{
            Task_Service.createTasks(recordIds, objectType);
        }
        catch(Exception e){
            throw new AuraHandledException(e.getMessage());
        }
    }
}
Now you might be wracking your brain right now and being like… ok, so what… but look closer, Simba. This controller will literally never grow, and neither will the ServiceFactory class we built above (the Application class might, but very little). This Task_Service middleman layer is so abstract you can swap out service implementations on the fly whenever you want, and this controller will NEVER NEED TO BE UPDATED (at least not for task service logic)! Basically, the only things that will change at this point are your custom metadata type (object), the custom permissions you map to users, and the variations of the Task Service implementation classes you’ll add over time for the various business units that get onboarded and want to use it. However, your controllers (and other places in the code that call the service) will never know the difference. Wyld, right? If you’re lost right now, let’s follow the chain of events step by step to clarify some things:
1) The controller calls the Task_Service class’s (the middleman’s) createTasks() method.
2) Task_Service’s createTasks() method calls its service() method.
3) The service() method uses the Application class’s service variable, which is an instance of our custom ServiceFactory class (shown above), to create a new instance of whatever Task Service implementation class (which implements Task_Service_Interface, making it of type Task_Service_Interface) is relevant for the running user’s assigned custom permissions, via the newInstance() method the ServiceFactory class overrode.
4) The service variable returns the correct Task Service implementation for the running user.
5) The createTasks() method is called on whatever Task Service implementation was determined to be correct for the running user.
6) Tasks are created!
If you’re still shook by all this, please, watch the video where we build all this together step by step and walk through everything. I promise, even if it’s a bit confusing, it’s worth the time to learn.