The Apex Common Library is an open source library originally created by Andy Fawcett when he was the CTO of FinancialForce and currently maintained by many community members, most notably John Daniel. Aside from its origins and the fflib_ prefix in the class names, it is no longer linked to FinancialForce in any way.
The library was originally created because implementing the Separation of Concerns Design Principle is difficult no matter what tech stack you’re working in. For Salesforce, the Apex Common Library was built to simplify the process of implementing Separation of Concerns as well as assist in managing DML transactions, creating high quality unit tests (you need the Apex Mocks library to assist with this) and enforcing coding and security best practices. If you want an exceptionally clean, understandable and flexible code base, the Apex Common library will greatly assist you in those endeavors.
Does The Apex Common Library Implement Separation of Concerns for me Automatically?
Unfortunately it’s not that simple. This library doesn’t automatically implement it for you (no library could), but it does give you the tools to implement this design principle in your Salesforce org or managed package with relative ease. Though there are many more classes in the Apex Common Library, there are four major classes to familiarize yourself with, along with four object oriented programming concepts and three major design patterns. Additionally, it’s beneficial to understand the difference between a unit test and an integration test. We’ll go over all of these things below.
The Four Major Classes
1) fflib_Application.cls – This Application class acts as a way to easily implement the Factory pattern for building the different layers when running your respective applications within your org (or managed package). When I say “Application” for an org based implementation this could mean a lot of things, but think of it as a grouping of code that represents a specific section of your org. Maybe you have a service desk in your org; that service desk could be represented as an “Application”. This class and the factory pattern are also what make the Apex Mocks library work; without implementing it, Apex Mocks will not work.
2) fflib_SObjectDomain.cls – This houses the base class that all Domain classes you create will extend. The many methods within this class serve to make your life considerably easier when building your domain classes for each object that requires a trigger. You can check out my Apex Common Domain Layer Implementation Guide for more details.
3) fflib_SObjectSelector.cls – This houses the base class that all Selector classes you create will extend. The many methods within this class will serve to make your life a ton easier when implementing selector classes for your various objects in your org. You can check out my Apex Common Selector Layer Implementation Guide for more details.
4) fflib_SObjectUnitOfWork.cls – This houses the class that implements the unit of work pattern for you. It holds your DML operations in memory until you’re ready to commit them all in one transaction and handles savepoint rollbacks for you. The unit of work is covered in depth later in this article.
The Four Object Oriented Programming Concepts
1) Inheritance – When a class inherits (or extends) another class, the subclass gets access to all of its publicly accessible methods and variables.
2) Polymorphism – When a class uses overloaded methods or overrides an inherited class’s methods.
3) Encapsulation – Only publishing (or making public) the methods and class variables that other classes actually need to use.
4) Interfaces – An interface is a contract with any class that implements it, guaranteeing that the class has specific method signatures implemented.
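To make those four concepts concrete, here’s a minimal, hypothetical Apex sketch (the class and method names are invented for illustration, and in a real org each class would live in its own file):
public interface TaskCreator {
    //Interfaces: a contract stating that any class implementing TaskCreator must provide this method
    void createTasks(Set<Id> recordIds);
}
public virtual class BaseRecordHandler {
    //Encapsulation: this variable stays private, only the public methods below are exposed to other classes
    private List<SObject> records;
    public BaseRecordHandler(List<SObject> records){
        this.records = records;
    }
    //Polymorphism: sub classes may override this virtual method to change its behavior
    public virtual String describe(){
        return 'Generic record handler';
    }
}
public class AccountHandler extends BaseRecordHandler implements TaskCreator {
    //Inheritance: AccountHandler gets access to everything publicly accessible in BaseRecordHandler
    public AccountHandler(List<Account> accounts){
        super(accounts);
    }
    //Polymorphism: overriding the inherited describe method
    public override String describe(){
        return 'Account specific record handler';
    }
    //Fulfilling the TaskCreator interface contract
    public void createTasks(Set<Id> recordIds){
        //task creation logic would go here
    }
}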
What is the Separation of Concerns Design Principle?
Basically separation of concerns is the practice of putting logical boundaries on your code. Putting these logical boundaries on your code helps make your code easier to understand, easier to maintain and much more flexible when it needs to be altered (and every code base ever has to be altered all the time).
In the Salesforce Ecosystem there are three major areas of concern we ideally should separate our code into. They are the following:
The Service Layer:
The Service Layer should house 100% of your non-object specific business logic (object specific logic is most often handled by the domain layer). That is, the logic specific to your organization’s business rules. Say for instance you have a part of your Salesforce App that focuses on Opportunity Sales Projections, and that app looks at the Opportunity, Quote, Product and Account objects. You might make an OpportunitySalesProjection_Service apex class that houses methods containing business logic specific to your Opportunity Sales Projection App. More information on the Service Layer here.
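As a rough sketch (the class and method names here are hypothetical, not from the library), that service class might look something like this:
public with sharing class OpportunitySalesProjection_Service
{
    //Business logic that spans the Opportunity, Quote, Product and Account objects lives here,
    //rather than in any one object's domain class
    public static void calculateSalesProjections(Set<Id> opportunityIds){
        //query the records the projection needs, run the projection calculations,
        //and commit the results, all coordinated from this one place
    }
}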
The Domain Layer:
The Domain Layer houses your individual objects’ (database tables’) trigger logic. It also houses object specific validation logic, logic that should always be applied on the insert of every record for an object, and object specific business logic (like how a task may be created for a specific object type, etc). If you use the Account object in your org you should create a Domain class equivalent for the Account object through the use of a trigger handler class of some sort. More information on the Domain Layer here.
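As a bare-bones illustration of the idea (this is a hypothetical hand-rolled sketch, not the Apex Common fflib_SObjectDomain implementation covered later), an Account domain class acting as a trigger handler might look like:
public with sharing class Accounts
{
    private List<Account> records;
    public Accounts(List<Account> records){
        this.records = records;
    }
    //Object specific validation logic that should always run on insert
    public void onValidate(){
        for(Account acct : records){
            if(acct.AnnualRevenue != null && acct.AnnualRevenue < 0){
                acct.addError('Annual Revenue cannot be negative');
            }
        }
    }
    //Object specific business logic, like creating follow up tasks for new accounts
    public void onAfterInsert(){
        //task creation logic for accounts would go here
    }
}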
The Selector Layer:
The Selector Layer is responsible for querying your objects (database tables) in Salesforce. Selector layer classes should be made for each individual object (or grouping of objects) that you intend to write queries for in your code. The goal of the selector layer is to maintain query consistency (consistency in ordering, common fields queried for, etc) and to be able to reuse common queries easily and not re-write them over and over again everywhere.
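A simplified sketch of what that buys you (ignoring the Apex Common base class for the moment, and using hypothetical names):
public inherited sharing class Account_Selector
{
    //One definition of the commonly queried fields and ordering, reused by every caller
    public List<Account> selectById(Set<Id> accountIds){
        return [SELECT Id, Name, AnnualRevenue
                FROM Account
                WHERE Id IN :accountIds
                ORDER BY Name];
    }
}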
Why is it Useful?
There are many benefits to implementing SoC, most of which were outlined above, but here are the highlights:
1) Modularizes your code into easy to understand packages of code making it easier to know what code controls what, why and when.
2) Massively reduces the amount of code in your org by centralizing your logic into different containers. For instance, maybe you currently have 13 different apex controllers that house similar case business logic. If you placed that business logic into a service class and had all 13 apex controllers call that service class instead your life would be a whole lot simpler. This can get a lot more abstract and turn into absolutely unprecedented code reduction, but we have to start somewhere a bit simpler.
3) Separation of Concerns lends itself to writing extremely thorough and comprehensive unit tests. It allows for easy dependency injection, which lets you mock a class’s dependencies in your test classes. We’ll go over this more when we get to the unit testing and Apex Mocks section of this tutorial, but if you want a quick and easy explanation, please feel free to check out my video covering dependency injection and mocking in apex.
How does the Apex Common Library help with SoC?
The Apex Common Library was quite literally built upon the three layers outlined above. It provides an unrivaled foundation to implement SoC in your Salesforce org. When I started this tutorial series I was not convinced it was the absolute best choice out there, but after hundreds of hours of practice, documentation, experimentation with other similar groupings of libraries, etc I feel I can confidently say (as of today) that this is something the community is lucky even exists and needs to be leveraged much more than it is today.
Example Code
All of the code examples in this repo are examples of SoC in action. You can check the whole repo out here. For layer specific examples check out the layer specific pages of this wiki.
Quality question… I mean honestly wtf is this thing? Lol, sorry, let’s figure it out together. The fflib_Application class exists for two primary purposes. The first is to give you an extremely abstract way of creating new instances of your unit of work, service layer, domain layer and selector layer classes in the Apex Common Library through the use of the factory pattern. The second is that implementing this Application class is imperative if you want to leverage the Apex Mocks unit testing library; Apex Mocks depends on this Application Factory being implemented.
Most importantly though, if you understand how interfaces, inheritance and polymorphism work, implementing this class allows you to write extremely abstract Salesforce implementations, which we’ll discuss more in the sections below.
Why is this class used?
Ok, if we ignore the fact that this is required for us to use the Apex Mocks library, understanding the power behind this class requires us to take a step back and formulate a real world Salesforce use case for implementing it… hopefully the following one will be easy for everyone to understand.
Say for instance I have a decent sized Salesforce instance and our business has a use case to create tasks across multiple objects, and the logic for creating those tasks is unique to every single object. Maybe on the Account object we create three new tasks every single time we create an account, and on the Contact object we create two tasks every single time a record is created or updated in a particular way, and we ideally want to call this logic on the fly from anywhere in our system.
No matter what we should probably place the task creation logic in our domain layer because it’s relevant to each individual object, but pretend for a second that we have like 20 different objects we need this kind of functionality on. Maybe we need the executed logic in an abstract “task creator” button that can be placed on any lightning app builder page and maybe some overnight batch jobs need to execute the logic too.
Well… what do we do? Let’s just take the abstract “Task Creator” button we might want to place on any object in our system. We could call each individual domain layer class’s task creation logic in the code based on the object we were on (code example below), but that logic tree could get massive and it’s not super ideal.
Task Service example with object logic tree
public with sharing class Task_Service_Impl
{
    //This method calls the task creators for each object type
    public void createTasks(Set<Id> recordIds, Schema.SObjectType objectType)
    {
        if(objectType == Account.getSObjectType()){
            new Accounts().createTasks(recordIds);
        }
        else if(objectType == Case.getSObjectType()){
            new Cases().createTasks(recordIds);
        }
        else if(objectType == Opportunity.getSObjectType()){
            new Opportunities().createTasks(recordIds);
        }
        else if(objectType == Taco__c.getSObjectType()){
            new Tacos().createTasks(recordIds);
        }
        else if(objectType == Chocolate__c.getSObjectType()){
            new Chocolates().createTasks(recordIds);
        }
        //etc etc for each object, this could go on for decades
    }
}
Maybe… just maybe there’s an easier way. This is where the factory pattern and the fflib_Application class come in handy. Through the use of the factory pattern we can create an abstract Task Service that can (based on a set of records we pass to it) select the right business logic to execute in each domain layer dynamically.
//Creation of the Application factory class
public with sharing class Application
{
    public static final fflib_Application.ServiceFactory service =
        new fflib_Application.ServiceFactory(
            new Map<Type, Type>{
                Task_Service_Interface.class => Task_Service_Impl.class}
        );

    public static final fflib_Application.DomainFactory domain =
        new fflib_Application.DomainFactory(
            Application.selector,
            new Map<SObjectType, Type>{
                Case.SObjectType => Cases.Constructor.class,
                Opportunity.SObjectType => Opportunities.Constructor.class,
                Account.SObjectType => Accounts.Constructor.class,
                Taco__c.SObjectType => Tacos.Constructor.class,
                Chocolate__c.SObjectType => Chocolates.Constructor.class}
        );
}
//The task service that anywhere can call and it will operate as expected with super minimal logic
public with sharing class Task_Service_Impl implements Task_Service_Interface
{
    //This method calls the task creators for each object type
    public void createTasks(Set<Id> recordIds, Schema.SObjectType objectType)
    {
        fflib_ISObjectDomain objectDomain = Application.domain.newInstance(recordIds);
        if(objectDomain instanceof Task_Creator_Interface){
            Task_Creator_Interface taskCreator = (Task_Creator_Interface)objectDomain;
            taskCreator.createTasks(recordIds);
        }
    }
}
You might be lookin at the two code examples right now like wuttttttttt how thooooo?? And I just wanna say, I fully understand that. The first time I saw this implemented I thought the same thing, but it’s a pretty magical thing. Thanks to the newInstance() methods on the fflib_Application class and the Task_Creator_Interface we’ve implemented on the domain classes, you can dynamically generate the correct domain when the code runs and call the create tasks method. Pretty wyld right? Also if you’re thinkin, “Yea that’s kinda nifty Matt, but you had to create this Application class and that’s a bunch of extra code.” you need to step back even farther. This Application factory can be leveraged ANYWHERE IN YOUR ENTIRE CODEBASE! Not just locally in your service class. If you need to implement something similar to automatically generate opportunities or Accounts or something from tons of different objects you can leverage this exact same Application class there. In the long run, this ends up being wayyyyyyyyy less code.
If you want a ton more in depth explanation on this, please watch the tutorial video. We code a live example together so I can explain this concept. It’s certainly not easy to grasp at first glance.
fflib_Application inner classes and methods cheat sheet
Inside the fflib_Application class there are four inner classes that represent factories for your unit of work, service layer, domain layer and selector layer.
//The constructor for this class requires you to pass a list of SObject types in the dependency order. So in this instance Accounts would always be inserted before your Contacts and Contacts before Cases, etc.
public static final fflib_Application.UnitOfWorkFactory UOW =
new fflib_Application.UnitOfWorkFactory(
new List<SObjectType>{
Account.SObjectType,
Contact.SObjectType,
Case.SObjectType,
Task.SObjectType}
);
After creating this unit of work variable above ^ in your Application class example here there are four important new instance methods you can leverage to generate a new unit of work:
1) newInstance() – This creates a new instance of the unit of work using the SObjectType list passed in the constructor.
newInstance() Example Method Call
public with sharing class Application
{
public static final fflib_Application.UnitOfWorkFactory UOW =
new fflib_Application.UnitOfWorkFactory(
new List<SObjectType>{
Account.SObjectType,
Contact.SObjectType,
Case.SObjectType,
Task.SObjectType}
);
}
public with sharing class SomeClass{
public void someClassMethod(){
fflib_ISObjectUnitOfWork unitOfWork = Application.UOW.newInstance();
}
}
2) newInstance(fflib_SObjectUnitOfWork.IDML dml) – This creates a new instance of the unit of work using the SObjectType list passed in the constructor, but with a custom IDML implementation that defines how the DML operations are actually performed.
newInstance(fflib_SObjectUnitOfWork.IDML dml) Example Method Call
public with sharing class Application
{
public static final fflib_Application.UnitOfWorkFactory UOW =
new fflib_Application.UnitOfWorkFactory(
new List<SObjectType>{
Account.SObjectType,
Contact.SObjectType,
Case.SObjectType,
Task.SObjectType}
);
}
//Custom IDML implementation
public with sharing class IDML_Example implements fflib_SObjectUnitOfWork.IDML
{
    public void dmlInsert(List<SObject> objList){
        //custom insert logic here
    }
    public void dmlUpdate(List<SObject> objList){
        //custom update logic here
    }
    public void dmlDelete(List<SObject> objList){
        //custom delete logic here
    }
    public void eventPublish(List<SObject> objList){
        //custom event publishing logic here
    }
    public void emptyRecycleBin(List<SObject> objList){
        //custom empty recycle bin logic here
    }
}
public with sharing class SomeClass{
public void someClassMethod(){
fflib_ISObjectUnitOfWork unitOfWork = Application.UOW.newInstance(new IDML_Example());
}
}
3) newInstance(List <SObjectType> objectTypes) – This creates a new instance of the unit of work and overwrites the SObject type list passed in the constructor so you can have a custom order if you need it.
newInstance(List <SObjectType> objectTypes) Example Method Call
public with sharing class Application
{
public static final fflib_Application.UnitOfWorkFactory UOW =
new fflib_Application.UnitOfWorkFactory(
new List<SObjectType>{
Account.SObjectType,
Contact.SObjectType,
Case.SObjectType,
Task.SObjectType}
);
}
public with sharing class SomeClass{
public void someClassMethod(){
fflib_ISObjectUnitOfWork unitOfWork = Application.UOW.newInstance(new List<SObjectType>{
Case.SObjectType,
Account.SObjectType,
Task.SObjectType,
Contact.SObjectType
});
}
}
4) newInstance(List<SObjectType> objectTypes, fflib_SObjectUnitOfWork.IDML dml) – This creates a new instance of the unit of work with a custom SObjectType ordering and a custom IDML implementation for handling the DML operations.
newInstance(List<SObjectType> objectTypes, fflib_SObjectUnitOfWork.IDML dml) Example Method Call
public with sharing class Application
{
public static final fflib_Application.UnitOfWorkFactory UOW =
new fflib_Application.UnitOfWorkFactory(
new List<SObjectType>{
Account.SObjectType,
Contact.SObjectType,
Case.SObjectType,
Task.SObjectType}
);
}
//Custom IDML implementation
public with sharing class IDML_Example implements fflib_SObjectUnitOfWork.IDML
{
    public void dmlInsert(List<SObject> objList){
        //custom insert logic here
    }
    public void dmlUpdate(List<SObject> objList){
        //custom update logic here
    }
    public void dmlDelete(List<SObject> objList){
        //custom delete logic here
    }
    public void eventPublish(List<SObject> objList){
        //custom event publishing logic here
    }
    public void emptyRecycleBin(List<SObject> objList){
        //custom empty recycle bin logic here
    }
}
public with sharing class SomeClass{
public void someClassMethod(){
fflib_ISObjectUnitOfWork unitOfWork = Application.UOW.newInstance(new List<SObjectType>{
Case.SObjectType,
Account.SObjectType,
Task.SObjectType,
Contact.SObjectType
}, new IDML_Example());
}
}
//This allows us to create a factory for instantiating service classes. You send it the interface for your service class
//and it will return the correct service layer class
//Example initialization: Object objectService = Application.service.newInstance(Task_Service_Interface.class);
public static final fflib_Application.ServiceFactory service =
new fflib_Application.ServiceFactory(new Map<Type, Type>{
SObject_SharingService_Interface.class => SObject_SharingService_Impl.class
});
After creating this service variable above ^ in your Application class example here there is one important new instance method you can leverage to generate a new service class instance:
1) newInstance(Type serviceInterfaceType) – This method sends back an instance of your service implementation class based on the interface you send in to it.
newInstance(Type serviceInterfaceType) Example method call:
//This is using the service variable above that we would've created in our Application class
Application.service.newInstance(Task_Service_Interface.class);
//This allows us to create a factory for instantiating selector classes. You send it an object type and it sends
//you the corresponding selector layer class.
//Example initialization: fflib_ISObjectSelector objectSelector = Application.selector.newInstance(objectType);
public static final fflib_Application.SelectorFactory selector =
new fflib_Application.SelectorFactory(
new Map<SObjectType, Type>{
Case.SObjectType => Case_Selector.class,
Contact.SObjectType => Contact_Selector.class,
Task.SObjectType => Task_Selector.class}
);
After creating this selector variable above ^ in your Application class example here there are three important methods you can leverage to generate a new selector class instance:
1) newInstance(SObjectType sObjectType) – This method will generate a new instance of the selector based on the object type passed to it. So for instance if you have an Opportunity_Selector class and pass Opportunity.SObjectType to the newInstance method, you will get back your Opportunity_Selector class (pending you have configured it this way in the map passed to your Application class).
newInstance(SObjectType sObjectType) Example method call:
//This is using the selector variable above that we would've created in our Application class
Application.selector.newInstance(Case.SObjectType);
2) selectById(Set<Id> recordIds) – This method, based on the ids you pass, will automatically call the registered selector layer class for the object type of those ids. It will then call the selectSObjectById method that all selector classes must implement and return a list of SObjects to you.
selectById(Set<Id> recordIds) Example method call:
//This is using the selector variable above that we would've created in our Application class
Application.selector.selectById(accountIdSet);
3) selectByRelationship(List<sObject> relatedRecords, SObjectField relationshipField) – This method, based on the relatedRecords and the relationship field passed to it, will generate a selector layer class for the object type in the relationship field. So say you were querying the Contact object and you wanted an Account Selector class: you could call this method, pass it the list of contacts you queried for and the AccountId field, and have an Account Selector returned to you (pending that selector was configured in the Application class shown above in this wiki article).
selectByRelationship(List<sObject> relatedRecords, SObjectField relationshipField) Example method call:
//This is using the selector variable above that we would've created in our Application class
Application.selector.selectByRelationship(contactList, Contact.AccountId);
//This allows you to create a factory for instantiating domain classes. You can send it a set of record ids and
//you'll get the corresponding domain layer.
//Example initialization: fflib_ISObjectDomain objectDomain = Application.domain.newInstance(recordIds);
public static final fflib_Application.DomainFactory domain =
new fflib_Application.DomainFactory(
Application.selector,
new Map<SObjectType, Type>{Case.SObjectType => Cases.Constructor.class,
Contact.SObjectType => Contacts.Constructor.class}
);
After creating this domain variable above ^ in your Application class example here there are three important methods you can leverage to generate a new domain class instance:
1) newInstance(Set <Id> recordIds) – This method creates a new instance of your domain class based off the object type in the set of ids you pass it.
newInstance(Set<Id> recordIds) Example method call:
Application.domain.newInstance(accountIdSet);
2) newInstance(List<sObject> records) – This method creates a new instance of your domain class based off the object type in the list of records you pass it.
newInstance(List<sObject> records) Example method call:
Application.domain.newInstance(accountList);
In every factory class inside the fflib_Application class there is a setMock method. These methods are used to pass in mock/fake versions of your classes for unit testing purposes. Make sure to leverage this method if you are planning to do unit testing. Leveraging this method eliminates the need to use dependency injection in your classes to allow for mocking. There are examples of how to leverage this method in the Implementing Mock Unit Testing with Apex Mocks section of this wiki.
A Unit of Work, “Maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems”. – Martin Fowler
The goal of the unit of work pattern is to simplify DML in your code and only commit changes to the database/objects when it’s truly time to commit. Considering the many limits around DML in Salesforce, it’s important to employ this pattern in your org in some way. It’s also important to note that this, “maintains a list of objects affected by a business transaction”, which indicates that the UOW pattern should be prevalent in your service layer (The service layer houses business logic).
The UOW pattern also ensures we don’t have data inconsistencies in our Salesforce instance. It does this by only committing work when all the DML operations complete successfully. It rolls back our transactions when any DML fails in our unit of work.
Benefits of Using the Unit of Work Pattern in Salesforce
There are several, but here are the biggest of them all: massive amounts of code reduction, consistency in your DML transactions, doing the fewest DML statements feasible (bulkification) and DML mocking in unit tests. Let’s figure out how we reduce the code and make it more consistent first.
The Code Reduction and Consistency
Think about all the places in your codebase where you insert records, error handle the inserting of your records and manage the transactional state of your records (Savepoints). Maybe if your org is new there’s not a ton happening yet, but as it grows the amount of code dealing with that can become enormous and, even worse, inconsistent. I’ve worked in 12 year old orgs that had 8000+ lines of code just dedicated to inserting records throughout the system and with every dev who wrote the code a new variety of transaction management took place, different error handling (or none at all), etc.
Code Bulkification
The unit of work pattern also helps a great deal with code bulkification. It encourages you to finish creating and modifying 100% of your records in your transaction prior to actually committing them (doing the DML transactions) to the database (objects). It makes sure that you are doing the absolute minimum number of DML statements necessary to be successful. For instance, maybe for some reason in your code you are updating cases in one method, and when you’re done you call another method and it updates those same cases… why do that? You could register all those updates and update all those cases at once with one DML statement. Whether you realize it at the time or not, every DML statement counts… use them sparingly.
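Here’s a small sketch of that idea using the Apex Commons unit of work (this assumes the Application.UOW factory shown elsewhere in this article, and casesToWorkOn is just a hypothetical list of already-queried cases):
fflib_ISObjectUnitOfWork uow = Application.UOW.newInstance();
for(Case caseRecord : casesToWorkOn){
    caseRecord.Status = 'Working';
    //register the change, no DML happens yet
    uow.registerDirty(caseRecord);
    //register a brand new related task, still no DML
    uow.registerNew(new Task(Subject = 'Follow up', WhatId = caseRecord.Id));
}
//one commit at the end performs the minimal DML statements for everything registered
uow.commitWork();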
DML Mocking for Unit Tests
If you’re not sure what mocking and unit tests are, then definitely check out my section on that in the wiki here. Basically, in an ideal scenario you would like to do unit testing, but unit testing depends on you having the ability to mock classes for your tests (basically creating fake versions of your classes that you have complete control over in your tests). Creating this layer that handles your DML transactions allows you to mock that layer in your classes when doing unit tests. If this is confusing, no worries, we’ll discuss it a bunch more later in the last three sections of this wiki.
It is a foundation built to allow you to leverage the unit of work design pattern from within Salesforce. Basically this class is designed to hold your database operations (insert, update, etc) in memory until you are ready to do all of your database operations in one big transaction. It also handles savepoint rollbacks to ensure data consistency. For instance, if you are inserting Opportunities with Quotes in the same database (DML) transaction, chances are you don’t wanna insert those Opportunities if your Quotes fail to insert. The unit of work class is set up to automatically handle that transaction management and roll back if anything fails.
It also follows bulkification best practices to make your life even easier when dealing with DML transactions.
Why is this class used?
This class is utilized so that you can have super fine control over your database transactions and so that you only do DML transactions when every single record is prepped and ready to be inserted, updated, etc.
Additionally there are three reasons it is important to leverage this class (or a class like it): 1) To allow for DML mocking in your test classes. 2) To massively reduce duplicate code for DML transactions in your org. 3) To make DML transaction management consistent.
Think about those last two for a second… how many lines of code in your org insert, update, upsert (etc) records? Then think about how much code also error handles those transactions and (if you’re doing things right) how much code goes into savepoint rollbacks. That all adds up over time to a ton of code. This class houses it all in one centralized apex class. You’ll never have to re-write all that logic again.
How to Register a Callback method for an Apex Commons UOW
The following code example shows you how to set up a callback method for your units of work using the fflib_SObjectUnitOfWork.IDoWork interface, should you need one.
public inherited sharing class HelpDeskAppPostCommitLogic implements fflib_SObjectUnitOfWork.IDoWork{
    List<Task> taskList;

    public HelpDeskAppPostCommitLogic(List<Task> taskList){
        this.taskList = taskList;
    }

    public void doWork(){
        //write callback code here
    }
}
The code below shows you how to actually make sure your unit of work calls your callback method.
fflib_ISObjectUnitOfWork uow = Helpdesk_Application.helpDeskUOW.newInstance();
//code to create some tasks
uow.registerNew(newTasks);
uow.registerWork(new HelpDeskAppPostCommitLogic(newTasks));
uow.commitWork();
Apex Commons Unit of Work Limitations
1) Records within the same object that have lookups to each other are currently not supported. For example, if the Account object has a Lookup to itself, that relationship cannot be registered.
2) You cannot do partial-success database transactions (the allOrNone = false option shown below) without creating a custom IDML implementation.
Database.insert(acctList, false);
3) To send emails with the Apex Commons UOW you must utilize the special registerEmail method.
4) It does not manage FLS and CRUD without implementing a custom class that implements the IDML interface and does that for you.
How and When to use the fflib_SObjectUnitOfWork IDML Interface
If your unit of work needs a custom implementation for inserting, updating, deleting, etc that is not supported by the SimpleDML inner class, then you’re gonna want to create a new class that implements the fflib_SObjectUnitOfWork.IDML interface. After you create that class, if you’re using the Application factory you would instantiate your unit of work like so: Application.UOW.newInstance(new CustomIDMLClass()); otherwise you would initialize it using public static fflib_SObjectUnitOfWork uow = new fflib_SObjectUnitOfWork(new List<SObjectType>{Case.SObjectType}, new CustomIDMLClass());. A CUSTOM IDML CLASS IS SUPER IMPORTANT IF YOU WANT TO MANAGE CRUD AND FLS!!! The fflib_SObjectUnitOfWork class does not do that for you! So let’s check out an example of how to implement a custom IDML class together below.
Example of an IDML Class
//Implementing this class allows you to overcome the limitations of the regular unit of work class.
public with sharing class IDML_Example implements fflib_SObjectUnitOfWork.IDML
{
public void dmlInsert(List<SObject> objList){
//custom insert logic here
}
public void dmlUpdate(List<SObject> objList){
//custom update logic here
}
public void dmlDelete(List<SObject> objList){
//custom delete logic here
}
public void eventPublish(List<SObject> objList){
//custom event publishing logic here
}
public void emptyRecycleBin(List<SObject> objList){
//custom empty recycle bin logic here
}
}
fflib_SObjectUnitOfWork class method cheat sheet
This does not encompass all methods in the fflib_SObjectUnitOfWork class, however it does cover the most commonly used methods. There are also methods in this class to publish platform events should you need them but they aren’t covered below.
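As a quick sketch, these are the methods you’ll likely reach for most often (the record variables below are hypothetical; check the class itself for the full list of overloads and exact signatures):
fflib_ISObjectUnitOfWork uow = Application.UOW.newInstance();
uow.registerNew(newAccount); //queue a record for insert
uow.registerDirty(existingContact); //queue a record for update
uow.registerDeleted(staleTask); //queue a record for delete
uow.registerRelationship(newContact, Contact.AccountId, newAccount); //relate a new child to a new parent before either has an Id
uow.registerEmail(followUpEmail); //queue an email to send on commit (see the registerEmail note in the limitations section above)
uow.commitWork(); //perform all queued DML in one transaction, rolling back if anything fails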
The Service Layer, “Defines an application’s boundaries with a layer of services that establishes a set of available operations and coordinates the application’s response in each operation”. – Martin Fowler
This essentially just means that the service layer should house your business logic. It should be a centralized place that holds code that represents business logic for each object (database table) or the service layer logic for a custom built app in your org (more common when building managed packages).
Difference between the Service Layer and Domain Layer – People seem to often confuse this layer with the Domain layer. The Domain layer is only for object specific default operations (triggers, validations, updates that should always execute on a database transaction, etc). The Service layer is for business logic for major modules/applications in your org. Sometimes that module is represented by an object, sometimes it is represented by a grouping of objects. Domain layer logic is specific to each individual object whereas services often are not.
Service Layer Naming Conventions
Class Names – Your service classes should be named after the area of the application your services represent. Typically service classes are created for important objects or applications within your org.
Service Class Name Examples (Note that I prefer underscores in class names, this is just personal preference):
Account_Service
DocumentGenerationApp_Service
Method Names – The public method names should be the names of the business operations they represent. The method names should reflect what the end users of your system would refer to the business operation as. Service layer methods should also ideally always be static.
Method Parameter Types and Naming – The method parameters in public methods for the service layer should typically only accept collections (Map, Set, List) as the majority of service layer methods should be bulkified (there are some scenarios however that warrant non-collection types). The parameters should be named something that reflects the data they represent.
Service Class Method Names and Parameter Examples:
public static void calculateOpportunityProfits(List<Account> accountsToCalculate)
public static void generateWordDocument(Map<String, SObject> sObjectByName)
Service Layer Security
Service Layer Security Enforcement – Service layers hold business logic, so by default they should at minimum use inherited sharing when declaring the classes. However, I would suggest always using with sharing and allowing developers to elevate the code to run without sharing when necessary by using a private inner class.
Example Security for a Service Layer Class:
public with sharing class Account_Service{
    public static void calculateOpportunityProfits(List<Account> accountsToCalculate){
        //code here
        new Account_Service_WithoutSharing().calculateOpportunityProfits_WithoutSharing(accountsToCalculate);
    }

    private without sharing class Account_Service_WithoutSharing{
        public void calculateOpportunityProfits_WithoutSharing(List<Account> accountsToCalculate){
            //code here
        }
    }
}
Service Layer Code Best Practices
Keeping the code as flexible as possible
You should make sure that the code in the service layer does not expect the data passed to it to be in any particular format. For instance, if the service layer code is expecting a List of Accounts that has a certain set of fields filled out, your service method has just become very fragile. What if the service needs an additional field on that list of accounts to be filled out in the future to do its job? Then you have to refactor all the places building lists of data to send to that service layer method.
Instead you could pass in a set of Account Ids, have the service method query for all the fields it actually requires itself, and then return the appropriate data. This will make your service layer methods much more flexible.
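A quick sketch of the more flexible approach (hypothetical method and field names):
//Instead of public static void calculateOpportunityProfits(List<Account> accountsToCalculate),
//which forces every caller to know which fields to query, accept ids and query inside the service:
public static void calculateOpportunityProfits(Set<Id> accountIdsToCalculate){
    List<Account> accountsToCalculate =
        [SELECT Id, AnnualRevenue FROM Account WHERE Id IN :accountIdsToCalculate];
    //calculation logic here, using exactly the fields this method queried for itself
}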
Transaction Management
Your service layer methods should handle transaction management (either with the unit of work pattern or otherwise) by leveraging Database.setSavepoint() and using try catch blocks to roll back when the execution fails.
Transaction management example
public static void calculateOpportunityProfits(Set<Id> accountIdsToCalculate){
    List<Account> accountsToCalculate = [SELECT Id FROM Account WHERE Id IN :accountIdsToCalculate];
    //calculation logic would go here
    System.Savepoint savePoint = Database.setSavepoint();
    try{
        Database.update(accountsToCalculate);
    }
    catch(Exception e){
        Database.rollback(savePoint);
        throw e;
    }
}
Compound Services
Sometimes code needs to call more than one method in the service layer of your code. In this case, instead of calling both service layer methods from your calling code like in the example below, you would ideally want to create a compound service method in your service layer.
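For illustration, calling two separate service methods from a controller might look like this (the controller and service method names here are hypothetical):
public with sharing class Opportunity_Closer_Controller
{
    @AuraEnabled
    public static void closeOutOpportunities(List<Id> opportunityIds){
        Set<Id> oppIds = new Set<Id>(opportunityIds);
        //two separate service calls, each managing its own transaction,
        //so one can succeed while the other fails
        Opportunity_Service.closeOpportunities(oppIds);
        Opportunity_Service.notifyAccountOwners(oppIds);
    }
}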
The reason the above approach is detrimental is that you end up with one of two side effects. Either transaction management is handled separately by each method, so one could fail while the other completes successfully (despite the fact we don’t actually want that to happen), or you handle transaction management in the class calling the service layer, which isn’t ideal either.
Instead we should create a new method in the service layer that combines those methods and handles the transaction management in a cleaner manner.
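Sticking with the same hypothetical service methods, the compound service method might look something like this:
public static void closeOpportunitiesAndNotifyOwners(Set<Id> opportunityIds){
    System.Savepoint savePoint = Database.setSavepoint();
    try{
        //both operations are now part of one business operation with one transaction boundary
        closeOpportunities(opportunityIds);
        notifyAccountOwners(opportunityIds);
    }
    catch(Exception e){
        //either everything succeeds or everything is rolled back
        Database.rollback(savePoint);
        throw e;
    }
}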
To find out how to implement the Service Layer using the Apex Common Library, continue reading here: Implementing the Service Layer with the Apex Common Library. If you’re not interested in utilizing the Apex Common Library, no worries; there are really no frameworks for implementing a Service Layer (to my knowledge) because this is literally just a business logic layer, so every single org’s service layer will be different. The only thing Apex Common assists with here is abstracting the service layer to assist with unit test mocking and to make your service class instantiations more dynamic.
Libraries That Could Be Used for the Service Layer
None to my knowledge although the Apex Common Library provides a good foundation for abstracting your service layers to assist with mocking and more dynamic class instantiations.
Service Layer Examples
Apex Common Example (Suggested)
All three of the below classes are tied together. We’ll go over how this works in the next section.
There is NO FRAMEWORK that can be made for service layer classes. This is a business logic layer and it will differ everywhere. No two businesses are identical. That being said, if you would like to leverage all of the other benefits of the Apex Common Library (primarily Apex Mocks) and you would like your service classes to be able to leverage the fflib_Application class to allow for dynamic runtime logic generation, you’ll need to structure your classes as outlined below. If you don’t want to leverage these things, then don’t worry about doing what is listed below… but trust me, in the long run it will likely be worth it as your org grows in size.
The Service Interface
For every service layer class you create you will create an interface (or potentially a virtual class you can extend) that your service layer implementation class will implement (more on that below). This interface will have every method in your class represented in it. An example of a service interface is below. Some people like to prefix their interfaces with the letter I (example: ICaseService), however I prefer to postfix it with _I or _Interface as it’s a bit clearer in my opinion.
The methods in this interface should represent all of the public methods you plan to create for this service class. Private methods should not be represented here.
public interface Task_Service_Interface
{
void createTasks(Set<Id> recordIds, Schema.SObjectType objectType);
}
The Service Layer Class
This class is where things get a little confusing in my opinion, but here’s the gist of it. This is the class you will actually call in your apex controllers (or occasionally domain classes) to execute the code… however there are no real implementation details in it (those exist in the implementation class outlined below). The reason this class sits in as a kind of middle man is because we want our controller classes, batch classes, domain classes, etc to never need to alter the class they call to get the work done, no matter what business logic is actually called at run time. In the Service Factory section below we’ll see how that becomes a huge factor. Below is an example of the Service Layer class setup.
//This class is what every calling class will actually call to. For more information on the
//Application class check out the fflib_Application class part of this wiki.
public with sharing class Task_Service
{
    //This literally just calls the Task_Service_Impl class's createTasks method
    public static void createTasks(Set<Id> recordIds, Schema.SObjectType objectType){
        service().createTasks(recordIds, objectType);
    }

    //This gets an instance of the Task_Service_Impl class from our Application class.
    //This method exists for ease of use in the other methods in this class
    private static Task_Service_Interface service(){
        return (Task_Service_Interface) Application.service.newInstance(Task_Service_Interface.class);
    }
}
The Service Implementation Class
This is the concrete business logic implementation. This is effectively the code that isn’t super abstract; it’s the custom built business logic specific to the business (or business unit) that needs it to be executed. Basically, this is where your actual business logic should reside. Now, again, you may be asking, but Matt… why not just create a new instance of this class and just use it? Why create some silly interface and some middle man class to call this class? This isn’t gonna be superrrrrrr simple to wrap your head around, but bear with me. In the next section we tie all these classes together and paint the bigger picture. An example of a Service Implementation class is below.
/**
 * @description This is the true implementation of your business logic for your service layer.
 * These impl classes are where all the magic happens. In this case this is a service class that
 * executes the business logic for abstract Task creation on any theoretical object.
 */
public with sharing class Task_Service_Impl implements Task_Service_Interface
{
    //This method creates tasks and MUST BE IMPLEMENTED since we are implementing the Task_Service_Interface
    public void createTasks(Set<Id> recordIds, Schema.SObjectType objectType)
    {
        //Getting a new instance of a domain class based purely on the ids of our records. If these were
        //case ids it would return a Case object domain class, if they were contacts it would return a
        //Contact object domain class
        fflib_ISObjectDomain objectDomain = Application.domain.newInstance(recordIds);

        //Getting a new instance of our selector class based purely on the object type passed. If we
        //passed in a Case object type we would get a case selector, a Contact object type a contact
        //selector, etc.
        fflib_ISObjectSelector objectSelector = Application.selector.newInstance(objectType);

        //We're creating a new unit of work instance from our Application class.
        fflib_ISObjectUnitOfWork unitOfWork = Application.UOW.newInstance();

        //List to hold our records that need tasks created for them
        List<SObject> objectsThatNeedTasks = new List<SObject>();

        //If our selector class is an instance of Task_Selector_Interface (if it implements the
        //Task_Selector_Interface interface) call the selectRecordsForTasks() method in the class.
        //Otherwise just call the selectSObjectsById method
        if(objectSelector instanceof Task_Selector_Interface){
            Task_Selector_Interface taskFieldSelector = (Task_Selector_Interface)objectSelector;
            objectsThatNeedTasks = taskFieldSelector.selectRecordsForTasks();
        }
        else{
            objectsThatNeedTasks = objectSelector.selectSObjectsById(recordIds);
        }

        //If our domain class is an instance of the Task_Creator_Interface (or implements the
        //Task_Creator_Interface interface) call the createTasks method
        if(objectDomain instanceof Task_Creator_Interface){
            Task_Creator_Interface taskCreator = (Task_Creator_Interface)objectDomain;
            taskCreator.createTasks(objectsThatNeedTasks, unitOfWork);
        }

        //Try committing the records we've created and/or updated in our unit of work
        //(we're basically doing all our DML at once here), else throw an exception.
        try{
            unitOfWork.commitWork();
        }
        catch(Exception e){
            throw e;
        }
    }
}
The fflib_Application.ServiceFactory class
The fflib_Application.ServiceFactory class… what is it and how does it fit in here? Well, if you read through all of Part 4: The fflib_Application Class then you hopefully have some solid background on what it’s used for and why, but it’s a little trickier to conceptualize for the service class, so let’s go over it a bit again. Basically it leverages The Factory Pattern to dynamically generate the correct code implementations at run time (when your code is actually running).
This is awesome for tons of stuff, but it’s especially awesome for the service layer. Why? You’ll notice that as your Salesforce instance grows, so does the number of interested parties. All of a sudden you’ve gone from one or two business units to 25 different business units, and what happens when those businesses need the same type of functionality with differing logic? You could make tons of if else statements determining what the user type is and then calling different methods based on that user’s type… but maybe there’s an easier way. If you are an ISV (a managed package provider) what I’m about to show you is likely 1000 times more important for you. If your product grows and people start adopting it, you absolutely need a way to allow flexibility in your application’s business logic, maybe even allow them to write their own logic and have a way for your code to execute it??
Let’s check out how allllllllllll these pieces come together below.
Tying all the classes together
Alright, let’s tie everything together piece by piece. Pretend we’ve got a custom metadata type that maps our service interfaces to a service class implementation and a custom user permission (or if you don’t wanna pretend you can check it out here). Let’s first start by creating our new class that extends the fflib_Application.ServiceFactory class and overrides its newInstance method.
/*
@description: This class is an override for the prebuilt fflib_Application.ServiceFactory that allows
us to dynamically call service classes based on the running user's custom permissions.
*/
public with sharing class ServiceFactory extends fflib_Application.ServiceFactory
{
    Map<String, Service_By_User_Type__mdt> servicesByUserPermAndInterface = new Map<String, Service_By_User_Type__mdt>();

    public ServiceFactory(Map<Type, Type> serviceInterfaceByServiceImpl){
        super(serviceInterfaceByServiceImpl);
        this.servicesByUserPermAndInterface = getServicesByUserPermAndInterface();
    }

    //Overriding the fflib_Application.ServiceFactory newInstance method to allow us to initialize a new
    //service implementation type based on the running user's custom permissions and the interface name passed in.
    public override Object newInstance(Type serviceInterfaceType){
        for(Service_By_User_Type__mdt serviceByUser : servicesByUserPermAndInterface.values()){
            if(servicesByUserPermAndInterface.containsKey(serviceByUser.User_Permission__c + serviceInterfaceType.getName())){
                Service_By_User_Type__mdt overrideClass = servicesByUserPermAndInterface.get(serviceByUser.User_Permission__c + serviceInterfaceType.getName());
                return Type.forName(overrideClass.Service_Implementation_Class__c).newInstance();
            }
        }
        return super.newInstance(serviceInterfaceType);
    }

    //Creating our map of overrides by our user custom permissions
    private Map<String, Service_By_User_Type__mdt> getServicesByUserPermAndInterface(){
        Map<String, Service_By_User_Type__mdt> servicesByUserType = new Map<String, Service_By_User_Type__mdt>();
        for(Service_By_User_Type__mdt serviceByUser : Service_By_User_Type__mdt.getAll().values()){
            //Checking to see if the running user has any of the permissions for our overrides,
            //if so we put the overrides in a map
            if(FeatureManagement.checkPermission(serviceByUser.User_Permission__c)){
                servicesByUserType.put(serviceByUser.User_Permission__c + serviceByUser.Service_Interface__c, serviceByUser);
            }
        }
        return servicesByUserType;
    }
}
Cool kewl cool, now that we have our custom ServiceFactory built to manage our overrides based on the running user’s custom permissions, we can leverage it in the Application Factory class we’ve hopefully built by now, like so:
public with sharing class Application
{
//Domain, Selector and UOW factories have been omitted for brevity, but should be added
//to this class
//This allows us to create a factory for instantiating service classes. You send it
//the interface for your service class
//and it will return the correct service layer class
//Example initialization: Object objectService =
//Application.service.newInstance(Task_Service_Interface.class);
public static final fflib_Application.ServiceFactory service =
new ServiceFactory(
new Map<Type, Type>{Task_Service_Interface.class =>
Task_Service_Impl.class});
}
Ok, we’ve done the hardest parts now. Next we need to pretend that we are using the service class interface, service implementation class and service class that we already built earlier (just above; scroll up to those sections and review them if you forgot), because we’re about to see how a controller would call this task service we’ve built.
public with sharing class Abstract_Task_Creator_Controller
{
    @AuraEnabled
    public static void createTasks(Id recordId){
        Set<Id> recordIds = new Set<Id>{recordId};
        Schema.SObjectType objectType = recordId.getSobjectType();
        try{
            Task_Service.createTasks(recordIds, objectType);
        }
        catch(Exception e){
            throw new AuraHandledException(e.getMessage());
        }
    }
}
Now you might be wracking your brain right now and being like… ok, so what… but look closer Simba. This controller will literally never grow, and neither will your Application class or the ServiceFactory class we’ve built above (well, the Application class might, but very little). This Task_Service middle man layer is so abstract you can swap out service implementations on the fly whenever you want, and this controller will NEVER NEED TO BE UPDATED (at least not for task service logic)! Basically the only things that will change at this point are your custom metadata type (object), the custom permissions you map to users, and the additional variations of the Task Service Implementation classes you’ll add over time for the various business units that get onboarded and want to use it. However, your controllers (and other places in the code that call the service) will never know the difference. Wyld right. If you’re lost right now, let’s follow the chain of events step by step to clarify some things:
1) The controller calls the Task_Service class’s (the middleman’s) createTasks() method.
2) Task_Service’s createTasks() method calls its service() method.
3) The service() method uses the Application class’s “service” variable, which is an instance of our custom ServiceFactory class (shown above), to create a new instance of whatever Task Service Implementation class (which implements Task_Service_Interface, making it of type Task_Service_Interface) is relevant for the running user’s assigned custom permissions, by using the newInstance() method the ServiceFactory class overrode.
4) The service variable returns the correct Task Service Implementation for the running user.
5) The createTasks() method is called on whatever Task Service Implementation was determined to be correct for the running user.
6) Tasks are created!
If you’re still shook by all this, please, watch the video where we build all this together step by step and walk through everything. I promise, even if it’s a bit confusing, it’s worth the time to learn.
The Template Method Pattern is one of the more popular Behavioral Design Patterns. The Template Method Pattern basically involves creating a genericized skeleton class that a sub class can extend and add functionality to. The genericized skeleton class has some core functionality pre-built, but expects you to fill out (although not explicitly) other overridable methods in your sub class to actually get much benefit out of it. Most trigger frameworks in existence leverage the Template Method Pattern. In fact, there are a lot of frameworks out there that leverage this pattern and I’m not even sure the creators know they leveraged it.
Why is it Useful?
This pattern is extremely useful because it allows you to define the core, generic parts of a class implementation (so they don’t need to be re-built over and over), while also allowing different developers the ability to implement their unique logic for their specific implementation. Take for instance a simple trigger handler framework. Most of these use the template method pattern. The core functionality is there (when to run a before insert method, how to handle certain trigger context variables, etc) but the object specific logic methods are overridable. For instance, the methods that determine what to do on the insert of a record would be overridden in an extended sub class, so that on an object by object basis that logic can differ.
Where does it fit into Separation of Concerns?
This fits into the concept of SoC because this pattern makes sure that you don’t repeat yourself (the DRY principle) and you write the minimal amount of code. Basically it allows you to separate out the generic code from the object specific code that has to be executed. You only write the generic code once and then allow subclasses to extend your template class and implement logic for those empty methods in your template class that need to have object or service specific logic.
Where is it used in the Apex Common Library
This design pattern is leveraged heavily by the fflib_SObjectDomain class in the Apex Common Library.
Example Code (Abstract Task Creation App)
fflib_SObjectDomain class – This class in the Apex Common library uses the template method pattern. Observe the many empty overridable methods (onBeforeInsert, onValidate, onBeforeUpdate, etc). It is expecting that a subclass will extend it and override one or more of those methods for any true functionality to occur.
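To see the pattern itself in miniature (outside of fflib), here is a stripped-down, hypothetical sketch of a template-method style trigger handler; the class and method names are invented, and each class would live in its own file:
//The generic skeleton: the core plumbing is written once, the specific steps are empty virtual methods
public virtual class TriggerHandlerTemplate
{
    public void run(){
        if(Trigger.isBefore && Trigger.isInsert){
            onBeforeInsert(Trigger.new);
        }
        else if(Trigger.isAfter && Trigger.isInsert){
            onAfterInsert(Trigger.new);
        }
        //other trigger contexts would be handled once, here, for every object
    }
    //Overridable hooks: sub classes fill in only the object specific behavior they need
    protected virtual void onBeforeInsert(List<SObject> newRecords){}
    protected virtual void onAfterInsert(List<SObject> newRecords){}
}
//An object specific sub class only overrides the steps it cares about
public class AccountTriggerHandler extends TriggerHandlerTemplate
{
    protected override void onBeforeInsert(List<SObject> newRecords){
        for(SObject record : newRecords){
            ((Account)record).Rating = 'Warm';
        }
    }
}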
In most coding languages you need to connect to the database, query for the data, and then create wrapper classes to represent each underlying table in your database(s) so you can define how that particular table (object) should behave. Salesforce, however, already does a lot of this for you. For instance, there is no need to connect to a database, declarative behavior for your tables (objects) is already represented, and your tables (objects) already have wrapper classes pre-defined for them (Ex: Contact cont = new Contact()).
However, the logic represented in a trigger is an exception to this rule. Apex triggers represent a unique scenario on the Salesforce platform: they are necessary for complex logic, but inherently they do not abide by any object oriented principles. You can’t create public methods in them, you can’t unit test them, you can’t re-use logic placed directly in a trigger anywhere else in your system, etc. This is a massive detriment we need to overcome, and that’s where the domain layer comes into play.
The Domain Layer allows you, on an object by object basis, to take an object oriented approach to centralizing your logic. Basically, logic specific to a single object will be located in one place and only one place by using the domain layer. This ensures your logic specific to a single object isn’t split into a ton of different places across your org.
When to make a new Domain Layer Class
Basically, at the very least, anytime you need to make a trigger on an object you should implement a Domain class. However, this is a bit generalized; sometimes you don’t actually need a trigger on an object, but you do have object specific behavior that should be implemented in a Domain class. For instance, if you have an object that doesn’t need a trigger, but it has a very specific way its tasks should be created, you should probably create a Domain Layer class for that object and put that task creation behavior there.
A domain layer class is essentially a mixture of a trigger handler class and a class that represents object specific behaviors.
Where should you leverage the domain layer in your code?
You should only ever call domain layer code from service class methods or from other domain class methods. Controllers, batch classes, etc should never call out to the domain directly.
Domain Class Naming Conventions
Class Names – Domain classes should be named as the plural of whatever object you are creating a domain layer for. For instance if you were creating a domain layer class for the Case object, the class would be declared as follows: public inherited sharing class Cases. This indicates that the class should be bulkified and handles multiple records, not a single object record.
Class Constructor – The constructor of these classes should always accept a list of records. This list of records will be leveraged by all of the methods within the domain class. This will be further explained below.
Method Names – Method names for database transactions should use the onTransactionName naming convention (Example: onAfterInsert). If the method is not related to a database transaction, its name should be descriptive of the domain logic being executed within it (Example: determineCaseStatus).
Parameter Names and Types – You do not typically need to pass anything into your domain layer methods. They should primarily operate on the list of records passed in the constructor in the majority of situations. However some behavior based (non-trigger invoked) methods may need other domain objects and/or units of work passed to them. This will be further explained in the sections below.
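Putting those conventions together, a skeletal (and hypothetical) Cases domain class might look like this:
public inherited sharing class Cases
{
    private List<Case> records;
    //The constructor always accepts the list of records the methods below will operate on
    public Cases(List<Case> records){
        this.records = records;
    }
    //Trigger-invoked method, named after the database transaction it handles
    public void onAfterInsert(){
        determineCaseStatus();
    }
    //Behavior based method with a descriptive name
    public void determineCaseStatus(){
        //object specific status logic here, operating on this.records
    }
}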
Domain Layer Best Practices
Transaction Management
In the event you are actually performing DML operations in your Domain class, you should either create a Unit of Work or have one passed into the method performing the DML to appropriately manage your transaction. If you don't want to leverage the unit of work pattern, you should, at the very least, set a savepoint (System.Savepoint savePoint = Database.setSavepoint();) prior to executing your DML statement and use a try/catch block to roll the transaction back if the DML fails.
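If you skip the unit of work, a minimal sketch of that savepoint approach looks something like this (the method name and the records list are purely for illustration):
public void commitCaseChanges(){
    //Set a savepoint before performing DML so the whole transaction can be rolled back
    System.Savepoint savePoint = Database.setSavepoint();
    try{
        update records;
    }
    catch(Exception e){
        //Roll back to the savepoint so no partial changes are committed, then surface the error
        Database.rollback(savePoint);
        throw e;
    }
}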
Implementing the Domain Layer
To find out how to implement the Domain Layer using Apex Common, continue reading here: Implementing the Domain Layer with the Apex Common Library. If you're not interested in utilizing the Apex Common library for this layer, you can implement virtually any trigger framework and the core of the domain layer will be covered.
The Builder Pattern is a Creational Design Pattern that allows you to construct a complex object one step at a time. Think about the construction of a car or a house, or maybe something less complicated, like the construction of a desktop computer. When you're building a new desktop computer you have to make a ton of selections: the right CPU, motherboard, etc. Instead of passing all of that selection information into a class that constructs the computer, it'd be a lot nicer to build it step by step. Let's check out some examples:
Non-Builder Pattern Computer Building Class Example:
public class ComputerBuilderController{
    public ComputerCreator createMidTierComputer(){
        return new ComputerCreator(midTierGPU, midTierCPU, midTierMotherBoard,
            midTierFan, null, null).getComputer();
    }
    public ComputerCreator createTopTierComputer(){
        return new ComputerCreator(topTierGPU, topTierCPU, topTierMotherBoard,
            topTierFan, topTierNetworkCard, null).getComputer();
    }
}
public class ComputerCreator{
    private CPU compCPU;
    private GPU compGPU;
    private MotherBoard compMotherBoard;
    private Fan compFan;
    private NetworkCard compNetworkCard;
    private Speakers compSpeakers;

    //This could go on forever and you might end up with hundreds of different constructors
    //for different variations of computer parts... This could become an absolutely enormous class.
    public ComputerCreator(GPU selectedGPU, CPU selectedCPU, MotherBoard selectedMotherBoard,
            Fan selectedFan, NetworkCard selectedNetworkCard, Speakers selectedSpeakers){
        setComputer(selectedGPU, selectedCPU, selectedMotherBoard, selectedFan,
            selectedNetworkCard, selectedSpeakers);
    }

    //Because of how this is set up, we're setting everything for the computer, even if we're
    //just setting some of the computer parts to null
    private void setComputer(GPU selectedGPU, CPU selectedCPU, MotherBoard selectedMotherBoard,
            Fan selectedFan, NetworkCard selectedNetworkCard, Speakers selectedSpeakers){
        this.compGPU = selectedGPU;
        this.compCPU = selectedCPU;
        this.compMotherBoard = selectedMotherBoard;
        this.compFan = selectedFan;
        this.compNetworkCard = selectedNetworkCard;
        this.compSpeakers = selectedSpeakers;
    }

    public ComputerCreator getComputer(){
        return this;
    }
}
You can see the setup in the above example is not exactly ideal, nor does it lend itself to easy code changes. If you go with the single constructor approach and just allow developers to pass nulls to the constructor, every time you need to add another option for the computers your code might build, you'll have to update every piece of code that calls the ComputerCreator class because the constructor will change. Alternatively, if you go with new constructor variations for each new option, you could end up with hundreds of constructors over time, which is also not great; that can become extremely confusing and difficult to maintain. So let's look at how to leverage the builder pattern to achieve the same thing.
Builder Pattern Computer Building Class Example:
public class ComputerBuilderController{
    public ComputerCreator createMidTierComputer(){
        return new ComputerCreator().
            setCPU(midTierCPU).
            setGPU(midTierGPU).
            setMotherBoard(midTierMotherBoard).
            setFan(midTierFan);
    }
    public ComputerCreator createTopTierComputer(){
        return new ComputerCreator().
            setCPU(topTierCPU).
            setGPU(topTierGPU).
            setMotherBoard(topTierMotherBoard).
            setFan(topTierFan).
            setNetworkCard(topTierNetworkCard);
    }
}
public class ComputerCreator{
    private CPU compCPU;
    private GPU compGPU;
    private MotherBoard compMotherBoard;
    private Fan compFan;
    private NetworkCard compNetworkCard;
    private Speakers compSpeakers;

    public ComputerCreator setCPU(CPU selectedCPU){
        this.compCPU = selectedCPU;
        return this;
    }
    public ComputerCreator setGPU(GPU selectedGPU){
        this.compGPU = selectedGPU;
        return this;
    }
    public ComputerCreator setMotherBoard(MotherBoard selectedMotherBoard){
        this.compMotherBoard = selectedMotherBoard;
        return this;
    }
    public ComputerCreator setFan(Fan selectedFan){
        this.compFan = selectedFan;
        return this;
    }
    public ComputerCreator setNetworkCard(NetworkCard selectedNetworkCard){
        this.compNetworkCard = selectedNetworkCard;
        return this;
    }
    public ComputerCreator setSpeakers(Speakers selectedSpeakers){
        this.compSpeakers = selectedSpeakers;
        return this;
    }
}
You can see in the above example that using the builder pattern here gives us an enormous amount of flexibility. We no longer need to pass null values into a constructor or build a bajillion constructor variations; we only call the methods that set the pieces of the computer we actually need. Additionally, you can easily add new computer part options to the ComputerCreator class and it won't affect code that has already been written. For instance, if I created a setWebcam method it would be no big deal; my createMidTierComputer and createTopTierComputer methods would not be impacted in any way and would continue to function just fine. Builder Pattern FTW!
Why is it Useful?
Take the computer example above: without the builder pattern you get one of two things. You either get an enormous constructor you send all your computer parts to (and likely pass a ton of nulls to), or you get a ton of constructors representing different computer variations... neither is a great choice. Complex objects typically have hundreds of optional choices you can make, so you need something more robust for selecting those options.
The builder pattern allows you to select those options piece by piece, if and when you want them. Take the computer example again: desktop computers do not need things like network cards, speakers, or a webcam to function. That being said, many people building a computer may want them for one reason or another to make their specific computer useful to them. Instead of making constructor variations for every combination of those items, why not just use the builder pattern to add them as needed? It makes the code a whole lot easier to deal with and easier to extend in the future.
Where does it fit into Separation of Concerns?
Builder classes are typically service classes of some sort. Maybe you create some super elaborate Opportunities in your org; you might have an Opportunity_Builder_Service or something along those lines. The pattern can help in a lot of areas to reduce your code in the long term and to increase your code's flexibility when new options need to be added to the object you are building, and I think we all know (if you've been doing this long enough) that businesses like to add and subtract things from the services they create on a whim.
Where is it used in the Apex Common Library?
This design pattern is leveraged heavily by the fflib_QueryFactory class in the Apex Common Library. It allows us to construct complex SOQL queries step by step.
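As a rough sketch of what that looks like (the Case fields, condition, and selectOpenCases method are just an example, and this assumes fflib_QueryFactory's fluent methods such as selectField, setCondition, setLimit and toSOQL):
public List<Case> selectOpenCases(){
    //Each method call adds one piece of the query, then toSOQL() assembles the final string
    String soqlQuery = new fflib_QueryFactory(Case.SObjectType).
        selectField('Id').
        selectField('Subject').
        setCondition('Status != \'Closed\'').
        setLimit(100).
        toSOQL();
    return (List<Case>) Database.query(soqlQuery);
}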