The Apex Common Library is an open source library originally created by Andy Fawcett when he was the CTO of FinancialForce, and it is currently maintained by many community members, most notably John Daniel. Aside from its origins and the fflib_ prefix in its class names, it is no longer linked to FinancialForce in any way.
The library was originally created because implementing the Separation of Concerns design principle is difficult no matter what tech stack you’re working in. For Salesforce, the Apex Common Library was built to simplify the process of implementing Separation of Concerns, to assist in managing DML transactions, to help you create high quality unit tests (you need the Apex Mocks library to assist with this) and to enforce coding and security best practices. If you want an exceptionally clean, understandable and flexible code base, the Apex Common Library will greatly assist you in those endeavors.
Does The Apex Common Library Implement Separation of Concerns for me Automatically?
Unfortunately it’s not that simple. This library doesn’t automatically do this for you (no library could), but what it does is give you the tools to easily implement this design principle in your Salesforce org or managed package. Though there are many more classes in the Apex Common Library, to implement SoC you need to familiarize yourself with four major classes, four object oriented programming concepts and three major design patterns. Additionally, it’s beneficial if you understand the difference between a Unit Test and an Integration Test. We’ll go over all of these things below.
The Four Major Classes
1) fflib_Application.cls – This Application class acts as a way to easily implement the Factory pattern for building the different layers when running your respective applications within your org (or managed package). When I say “Application” for an org based implementation this could mean a lot of things, but think of it as a grouping of code that represents a specific section of your org. Maybe you have a service desk in your org, that service desk could be represented as an “Application”. This class and the factory pattern are also what makes the Apex Mocks Library work, without implementing it, Apex Mocks will not work.
2) fflib_SObjectDomain.cls – This houses the base class that all Domain classes you create will extend. The many methods within this class serve to make your life considerably easier when building a domain class for each object that requires a trigger. You can check out my Apex Common Domain Layer Implementation Guide for more details.
3) fflib_SObjectSelector.cls – This houses the base class that all Selector classes you create will extend. The many methods within this class will serve to make your life a ton easier when implementing selector classes for the various objects in your org. You can check out my Apex Common Selector Layer Implementation Guide for more details.
4) fflib_SObjectUnitOfWork.cls – This houses the base class that implements the unit of work pattern. It holds your DML operations in memory until you are ready to commit them all in one transaction and handles savepoint rollbacks for you. We cover it in depth further down this wiki.
The Four Object Oriented Programming Concepts
1) Inheritance – When a class inherits (or extends) another class, the subclass gets access to all of that class’s publicly accessible methods and variables.
2) Polymorphism – When a class uses overloaded methods or overrides an inherited class’s methods.
3) Encapsulation – Only publishing (making public) the methods and class variables that other classes actually need to use.
4) Interfaces – An interface is a contract with the classes that implement it, guaranteeing those classes implement specific method signatures.
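To make these four concepts concrete, here is a small hypothetical Apex sketch that uses all of them at once (the TaskCreator, BaseDomain and Accounts names are purely illustrative, and each class would live in its own file):

```apex
//Interfaces: a contract any implementing class must fulfill
public interface TaskCreator {
    void createTasks(Set<Id> recordIds);
}

//Inheritance: subclasses of BaseDomain get its public methods for free
public virtual class BaseDomain {
    //Encapsulation: this variable stays private because no other class needs it
    private String domainName = 'generic domain';
    public virtual String describe() { return domainName; }
}

//Polymorphism: Accounts overrides the inherited describe method
public class Accounts extends BaseDomain implements TaskCreator {
    public override String describe() { return 'account domain'; }
    public void createTasks(Set<Id> recordIds) {
        //object specific task creation logic would go here
    }
}
```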
What is the Separation of Concerns Design Principle?
Basically, separation of concerns is the practice of putting logical boundaries on your code. Putting these logical boundaries on your code makes it easier to understand, easier to maintain and much more flexible when it needs to be altered (and every code base ever built has to be altered all the time).
In the Salesforce Ecosystem there are three major areas of concern we ideally should separate our code into. They are the following:
The Service Layer:
The Service Layer should house 100% of your non-object specific business logic (object specific logic is most often handled by the domain layer). That is, the logic specific to your organization’s business rules. Say for instance you have a part of your Salesforce App that focuses on Opportunity Sales Projections, and the Opportunity Sales Projection App looks at the Opportunity, Quote, Product and Account objects. You might make an OpportunitySalesProjection_Service apex class that houses methods with business logic specific to your Opportunity Sales Projection App. More information on the Service Layer here.
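As a rough sketch of what that might look like (the class and method names here are invented for illustration, not part of the library):

```apex
public with sharing class OpportunitySalesProjection_Service {
    //Cross-object business logic for the projection app lives here,
    //not in any single object's domain class
    public static void recalculateProjections(Set<Id> opportunityIds) {
        //query the related Opportunity, Quote, Product and Account records,
        //apply the projection rules and register the updates for commit
    }
}
```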
The Domain Layer:
The Domain Layer houses the trigger logic for your individual objects (database tables). It also houses object specific validation logic, logic that should always be applied on the insert of every record for an object, and object specific business logic (like how a task may be created for a specific object type, etc). If you use the Account object in your org, you should create a Domain class equivalent for the Account object through the use of a trigger handler class of some sort. More information on the Domain Layer here.
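A bare-bones Account domain class built on the library’s fflib_SObjectDomain base class might look like this (the onApplyDefaults body is a placeholder):

```apex
public with sharing class Accounts extends fflib_SObjectDomain {
    public Accounts(List<Account> records) {
        super(records);
    }

    //Required inner class so the domain factory can construct this domain reflectively
    public class Constructor implements fflib_SObjectDomain.IConstructable {
        public fflib_SObjectDomain construct(List<SObject> records) {
            return new Accounts((List<Account>) records);
        }
    }

    //Object specific defaulting logic that should run on every record insert
    public override void onApplyDefaults() {
        for (Account acct : (List<Account>) Records) {
            //set default field values here
        }
    }
}
```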
The Selector Layer:
The Selector Layer is responsible for querying your objects (database tables) in Salesforce. Selector layer classes should be made for each individual object (or grouping of objects) that you intend to write queries for in your code. The goal of the selector layer is to maintain query consistency (consistency in ordering, common fields queried for, etc) and to be able to reuse common queries easily and not re-write them over and over again everywhere.
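For illustration, a minimal Account selector built on the fflib_SObjectSelector base class might look like this (the field list is deliberately short):

```apex
public with sharing class Account_Selector extends fflib_SObjectSelector {
    public Schema.SObjectType getSObjectType() {
        return Account.SObjectType;
    }

    //The common fields every query from this selector returns, for consistency
    public List<Schema.SObjectField> getSObjectFieldList() {
        return new List<Schema.SObjectField>{
            Account.Id,
            Account.Name
        };
    }

    //Reusable query so callers never re-write this SOQL
    public List<Account> selectById(Set<Id> accountIds) {
        return (List<Account>) selectSObjectsById(accountIds);
    }
}
```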
Why is it Useful?
There are many benefits to implementing SoC, most of which were outlined above, but here are the highlights:
1) Modularizes your code into easy-to-understand packages of code, making it easier to know what code controls what, why and when.
2) Massively reduces the amount of code in your org by centralizing your logic into different containers. For instance, maybe you currently have 13 different apex controllers that house similar case business logic. If you placed that business logic into a service class and had all 13 apex controllers call that service class instead, your life would be a whole lot simpler. This can get a lot more abstract and lead to truly unprecedented code reduction, but we have to start somewhere a bit simpler.
3) Separation of Concerns lends itself to writing extremely well done and comprehensive Unit Tests. It allows for easy dependency injection, which allows you to mock a class’s dependencies in your test classes. We’ll go over this more when we get to the Unit Testing and Apex Mocks section of this tutorial, but if you want a quick and easy explanation, please feel free to check out my video covering dependency injection and mocking in apex.
How does the Apex Common Library help with SoC?
The Apex Common Library was quite literally built upon the three layers outlined above. It provides an unrivaled foundation for implementing SoC in your Salesforce org. When I started this tutorial series I was not convinced it was the absolute best choice out there, but after hundreds of hours of practice, documentation and experimentation with other similar libraries, I feel I can confidently say (as of today) that this is something the community is lucky even exists, and it needs to be leveraged much more than it is today.
Example Code
All of the code examples in this repo are examples of SoC in action. You can check the whole repo out here. For layer specific examples check out the layer specific pages of this wiki.
Quality question… I mean honestly wtf is this thing? Lol, sorry, let’s figure it out together. The fflib_Application class exists for two primary purposes. The first is to give you an extremely abstract way of creating new instances of your unit of work, service layer, domain layer and selector layer classes through the use of the factory pattern. The second is that implementing this Application class is imperative if you want to leverage the Apex Mocks unit testing library, which depends on this Application Factory being implemented.
Most importantly though, if you understand how interfaces, inheritance and polymorphism work, implementing this class allows you to write extremely abstract Salesforce implementations, which we’ll discuss more in the sections below.
Why is this class used?
Ok, if we ignore the fact that this is required for us to use the Apex Mocks library, understanding the power behind this class requires us to take a step back and formulate a real world Salesforce use case for implementing it… hopefully the following one will be easy for everyone to understand.
Say for instance I have a decent sized Salesforce instance and our business has a use case to create tasks across multiple objects, and the logic for creating those tasks is unique to every single object. Maybe on the Account object we create three new tasks every single time we create an account, and on the Contact object we create two tasks every single time a record is created or updated in a particular way, and we ideally want to call this logic on the fly from anywhere in our system.
No matter what we should probably place the task creation logic in our domain layer because it’s relevant to each individual object, but pretend for a second that we have like 20 different objects we need this kind of functionality on. Maybe we need the executed logic in an abstract “task creator” button that can be placed on any lightning app builder page and maybe some overnight batch jobs need to execute the logic too.
Well… what do we do? Let’s just take the abstract “Task Creator” button we might want to place on any object in our system. We could call each individual domain layer class’s task creation logic in the code based on the object we were on (code example below), but that logic tree could get massive and it’s not super ideal.
Task Service example with object logic tree
public with sharing class Task_Service_Impl
{
    //This method calls the task creators for each object type
    public void createTasks(Set<Id> recordIds, Schema.SObjectType objectType)
    {
        if(objectType == Account.getSObjectType()){
            new Accounts().createTasks(recordIds);
        }
        else if(objectType == Case.getSObjectType()){
            new Cases().createTasks(recordIds);
        }
        else if(objectType == Opportunity.getSObjectType()){
            new Opportunities().createTasks(recordIds);
        }
        else if(objectType == Taco__c.getSObjectType()){
            new Tacos().createTasks(recordIds);
        }
        else if(objectType == Chocolate__c.getSObjectType()){
            new Chocolates().createTasks(recordIds);
        }
        //etc etc for each object, could go on for decades
    }
}
Maybe… just maybe there’s an easier way. This is where the factory pattern and the fflib_Application class come in handy. Through the use of the factory pattern we can create an abstract Task Service that can (based on a set of records we pass to it) select the right business logic to execute in each domain layer dynamically.
//Creation of the Application factory class
public with sharing class Application
{
    public static final fflib_Application.ServiceFactory service =
        new fflib_Application.ServiceFactory(
            new Map<Type, Type>{
                Task_Service_Interface.class => Task_Service_Impl.class}
        );

    //The domain factory's constructor needs a selector factory, so one is
    //registered here too (the selector classes are assumed to exist elsewhere)
    public static final fflib_Application.SelectorFactory selector =
        new fflib_Application.SelectorFactory(
            new Map<SObjectType, Type>{
                Case.SObjectType => Case_Selector.class,
                Opportunity.SObjectType => Opportunity_Selector.class,
                Account.SObjectType => Account_Selector.class}
        );

    public static final fflib_Application.DomainFactory domain =
        new fflib_Application.DomainFactory(
            Application.selector,
            new Map<SObjectType, Type>{
                Case.SObjectType => Cases.Constructor.class,
                Opportunity.SObjectType => Opportunities.Constructor.class,
                Account.SObjectType => Accounts.Constructor.class,
                Taco__c.SObjectType => Tacos.Constructor.class,
                Chocolate__c.SObjectType => Chocolates.Constructor.class}
        );
}
//The task service that anywhere can call and it will operate as expected with super minimal logic
public with sharing class Task_Service_Impl implements Task_Service_Interface
{
    //This method calls the task creators for each object type
    public void createTasks(Set<Id> recordIds, Schema.SObjectType objectType)
    {
        fflib_ISObjectDomain objectDomain = Application.domain.newInstance(recordIds);
        if(objectDomain instanceof Task_Creator_Interface){
            Task_Creator_Interface taskCreator = (Task_Creator_Interface)objectDomain;
            taskCreator.createTasks(recordIds);
        }
    }
}
You might be lookin at the two code examples right now like wuttttttttt how thooooo?? And I just wanna say, I fully understand that. The first time I saw this implemented I thought the same thing, but it’s a pretty magical thing. Thanks to the newInstance() methods on the fflib_Application class and the Task_Creator_Interface we’ve implemented on the domain classes, you can dynamically generate the correct domain when the code runs and call the create tasks method. Pretty wyld right? Also if you’re thinkin, “Yea that’s kinda nifty Matt, but you had to create this Application class and that’s a bunch of extra code.” you need to step back even farther. This Application factory can be leveraged ANYWHERE IN YOUR ENTIRE CODEBASE! Not just locally in your service class. If you need to implement something similar to automatically generate opportunities or Accounts or something from tons of different objects you can leverage this exact same Application class there. In the long run, this ends up being wayyyyyyyyy less code.
If you want a ton more in depth explanation on this, please watch the tutorial video. We code a live example together so I can explain this concept. It’s certainly not easy to grasp at first glance.
fflib_Application inner classes and methods cheat sheet
Inside the fflib_Application class there are four inner classes that represent factories for your unit of work, service layer, domain layer and selector layer.
//The constructor for this class requires you to pass a list of SObject types in dependency order.
//So in this instance Accounts would always be inserted before Contacts, Contacts before Cases, etc.
public static final fflib_Application.UnitOfWorkFactory UOW =
    new fflib_Application.UnitOfWorkFactory(
        new List<SObjectType>{
            Account.SObjectType,
            Contact.SObjectType,
            Case.SObjectType,
            Task.SObjectType}
    );
After creating this unit of work variable above ^ in your Application class example here there are four important new instance methods you can leverage to generate a new unit of work:
1) newInstance() – This creates a new instance of the unit of work using the SObjectType list passed in the constructor.
newInstance() Example Method Call
public with sharing class Application
{
    public static final fflib_Application.UnitOfWorkFactory UOW =
        new fflib_Application.UnitOfWorkFactory(
            new List<SObjectType>{
                Account.SObjectType,
                Contact.SObjectType,
                Case.SObjectType,
                Task.SObjectType}
        );
}

public with sharing class SomeClass
{
    public void someClassMethod(){
        fflib_ISObjectUnitOfWork unitOfWork = Application.UOW.newInstance();
    }
}
2) newInstance(fflib_SObjectUnitOfWork.IDML dml) – This creates a new instance of the unit of work using the SObjectType list passed in the constructor, plus a custom IDML implementation that overrides how the DML is actually performed.
newInstance(fflib_SObjectUnitOfWork.IDML dml) Example Method Call
public with sharing class Application
{
    public static final fflib_Application.UnitOfWorkFactory UOW =
        new fflib_Application.UnitOfWorkFactory(
            new List<SObjectType>{
                Account.SObjectType,
                Contact.SObjectType,
                Case.SObjectType,
                Task.SObjectType}
        );
}

//Custom IDML implementation (interface method implementations must be public)
public with sharing class IDML_Example implements fflib_SObjectUnitOfWork.IDML
{
    public void dmlInsert(List<SObject> objList){
        //custom insert logic here
    }
    public void dmlUpdate(List<SObject> objList){
        //custom update logic here
    }
    public void dmlDelete(List<SObject> objList){
        //custom delete logic here
    }
    public void eventPublish(List<SObject> objList){
        //custom event publishing logic here
    }
    public void emptyRecycleBin(List<SObject> objList){
        //custom empty recycle bin logic here
    }
}

public with sharing class SomeClass
{
    public void someClassMethod(){
        fflib_ISObjectUnitOfWork unitOfWork = Application.UOW.newInstance(new IDML_Example());
    }
}
3) newInstance(List <SObjectType> objectTypes) – This creates a new instance of the unit of work and overwrites the SObject type list passed in the constructor so you can have a custom order if you need it.
newInstance(List <SObjectType> objectTypes) Example Method Call
public with sharing class Application
{
    public static final fflib_Application.UnitOfWorkFactory UOW =
        new fflib_Application.UnitOfWorkFactory(
            new List<SObjectType>{
                Account.SObjectType,
                Contact.SObjectType,
                Case.SObjectType,
                Task.SObjectType}
        );
}

public with sharing class SomeClass
{
    public void someClassMethod(){
        fflib_ISObjectUnitOfWork unitOfWork = Application.UOW.newInstance(
            new List<SObjectType>{
                Case.SObjectType,
                Account.SObjectType,
                Task.SObjectType,
                Contact.SObjectType});
    }
}
4) newInstance(List<SObjectType> objectTypes, fflib_SObjectUnitOfWork.IDML dml) – This creates a new instance of the unit of work with both a custom SObjectType ordering and a custom IDML implementation.
newInstance(List<SObjectType> objectTypes, fflib_SObjectUnitOfWork.IDML dml) Example Method Call
public with sharing class Application
{
    public static final fflib_Application.UnitOfWorkFactory UOW =
        new fflib_Application.UnitOfWorkFactory(
            new List<SObjectType>{
                Account.SObjectType,
                Contact.SObjectType,
                Case.SObjectType,
                Task.SObjectType}
        );
}

//Custom IDML implementation (interface method implementations must be public)
public with sharing class IDML_Example implements fflib_SObjectUnitOfWork.IDML
{
    public void dmlInsert(List<SObject> objList){
        //custom insert logic here
    }
    public void dmlUpdate(List<SObject> objList){
        //custom update logic here
    }
    public void dmlDelete(List<SObject> objList){
        //custom delete logic here
    }
    public void eventPublish(List<SObject> objList){
        //custom event publishing logic here
    }
    public void emptyRecycleBin(List<SObject> objList){
        //custom empty recycle bin logic here
    }
}

public with sharing class SomeClass
{
    public void someClassMethod(){
        fflib_ISObjectUnitOfWork unitOfWork = Application.UOW.newInstance(
            new List<SObjectType>{
                Case.SObjectType,
                Account.SObjectType,
                Task.SObjectType,
                Contact.SObjectType},
            new IDML_Example());
    }
}
//This allows us to create a factory for instantiating service classes. You send it the interface for your service class
//and it will return the correct service layer class.
//Example initialization: Object objectService = Application.service.newInstance(Task_Service_Interface.class);
public static final fflib_Application.ServiceFactory service =
    new fflib_Application.ServiceFactory(new Map<Type, Type>{
        SObject_SharingService_Interface.class => SObject_SharingService_Impl.class
    });
After creating this service variable above ^ in your Application class example here there is one important new instance method you can leverage to generate a new service class instance:
1) newInstance(Type serviceInterfaceType) – This method sends back an instance of your service implementation class based on the interface you send in to it.
newInstance(Type serviceInterfaceType) Example method call:
//This is using the service variable above that we would've created in our Application class
Application.service.newInstance(Task_Service_Interface.class);
//This allows us to create a factory for instantiating selector classes. You send it an object type and it sends
//you the corresponding selector layer class.
//Example initialization: fflib_ISObjectSelector objectSelector = Application.selector.newInstance(objectType);
public static final fflib_Application.SelectorFactory selector =
    new fflib_Application.SelectorFactory(
        new Map<SObjectType, Type>{
            Case.SObjectType => Case_Selector.class,
            Contact.SObjectType => Contact_Selector.class,
            Task.SObjectType => Task_Selector.class}
    );
After creating this selector variable above ^ in your Application class example here there are three important methods you can leverage to generate a new selector class instance:
1) newInstance(SObjectType sObjectType) – This method will generate a new instance of the selector based on the object type passed to it. So for instance if you have an Opportunity_Selector class and pass Opportunity.SObjectType to the newInstance method, you will get back your Opportunity_Selector class (pending you have configured it this way in the map passed to your Application class).
newInstance(SObjectType sObjectType) Example method call:
//This is using the selector variable above that we would've created in our Application class
Application.selector.newInstance(Case.SObjectType);
2) selectById(Set<Id> recordIds) – This method, based on the ids you pass, will automatically call the selector layer class registered for that set of ids’ object type. It will then call the selectSObjectById method that all selector classes must implement and return a list of SObjects to you.
selectById(Set<Id> recordIds) Example method call:
//This is using the selector variable above that we would've created in our Application class
Application.selector.selectById(accountIdSet);
3) selectByRelationship(List<sObject> relatedRecords, SObjectField relationshipField) – This method, based on the relatedRecords and the relationship field passed to it, will generate a selector layer class for the object type of the relationship field. So say you were querying the Contact object and you wanted an Account selector class: you could call this method, pass it the list of contacts you queried and the AccountId field, and have an Account selector returned to you (pending that selector was configured in the Application class shown above in this wiki article).
selectByRelationship(List<sObject> relatedRecords, SObjectField relationshipField) Example method call:
//This is using the selector variable above that we would've created in our Application class
Application.selector.selectByRelationship(contactList, Contact.AccountId);
//This allows you to create a factory for instantiating domain classes. You can send it a set of record ids and
//you'll get the corresponding domain layer.
//Example initialization: fflib_ISObjectDomain objectDomain = Application.domain.newInstance(recordIds);
public static final fflib_Application.DomainFactory domain =
    new fflib_Application.DomainFactory(
        Application.selector,
        new Map<SObjectType, Type>{
            Case.SObjectType => Cases.Constructor.class,
            Contact.SObjectType => Contacts.Constructor.class}
    );
After creating this domain variable above ^ in your Application class example here there are three important methods you can leverage to generate a new domain class instance:
1) newInstance(Set <Id> recordIds) – This method creates a new instance of your domain class based off the object type in the set of ids you pass it.
newInstance(Set<Id> recordIds) Example method call:
Application.domain.newInstance(accountIdSet);
2) newInstance(List<sObject> records) – This method creates a new instance of your domain class based off the object type of the list of records you pass it.
newInstance(List<sObject> records) Example method call:
Application.domain.newInstance(accountList);
3) newInstance(List<sObject> records, SObjectType domainSObjectType) – This method creates a new instance of your domain class using the object type you explicitly pass it, which is useful when the record list could be empty.
newInstance(List<sObject> records, SObjectType domainSObjectType) Example method call:
Application.domain.newInstance(accountList, Account.SObjectType);

In every factory class inside the fflib_Application class there is a setMock method. These methods are used to pass in mock/fake versions of your classes for unit testing purposes. Make sure to leverage this method if you are planning to do unit testing. Leveraging this method eliminates the need to use dependency injection in your classes to allow for mocking. There are examples of how to leverage this method in the Implementing Mock Unit Testing with Apex Mocks section of this wiki.
A Unit of Work, “Maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems”. – Martin Fowler
The goal of the unit of work pattern is to simplify DML in your code and only commit changes to the database/objects when it’s truly time to commit. Considering the many limits around DML in Salesforce, it’s important to employ this pattern in your org in some way. It’s also important to note that this, “maintains a list of objects affected by a business transaction”, which indicates that the UOW pattern should be prevalent in your service layer (The service layer houses business logic).
The UOW pattern also ensures we don’t have data inconsistencies in our Salesforce instance. It does this by only committing work when all the DML operations complete successfully. It rolls back our transactions when any DML fails in our unit of work.
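As a sketch of that rollback guarantee (assuming Opportunity and Quote are registered in the Application.UOW factory's dependency list shown later in this wiki), registering both records in one unit of work means they commit together or not at all:

```apex
fflib_ISObjectUnitOfWork uow = Application.UOW.newInstance();

Opportunity opp = new Opportunity(
    Name = 'Big Deal',
    StageName = 'Prospecting',
    CloseDate = Date.today().addDays(30));
uow.registerNew(opp);

//Registers the quote and wires its OpportunityId lookup to the new opportunity
Quote quo = new Quote(Name = 'Big Deal Quote');
uow.registerNew(quo, Quote.OpportunityId, opp);

//One transaction: if the quote insert fails, the opportunity rolls back too
uow.commitWork();
```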
Benefits of using the Unit of Work Pattern in Salesforce
There are several, but here are the biggest of them all: massive amounts of code reduction, consistency in your DML transactions, doing the minimal DML statements feasible (bulkification) and DML mocking in unit tests. Let’s figure out how we reduce the code and make it more consistent first.
The Code Reduction and Consistency
Think about all the places in your codebase where you insert records, error handle the inserting of your records and manage the transactional state of your records (Savepoints). Maybe if your org is new there’s not a ton happening yet, but as it grows the amount of code dealing with that can become enormous and, even worse, inconsistent. I’ve worked in 12 year old orgs that had 8000+ lines of code just dedicated to inserting records throughout the system and with every dev who wrote the code a new variety of transaction management took place, different error handling (or none at all), etc.
Code Bulkification
The unit of work pattern also helps a great deal with code bulkification. It encourages you to finish creating and modifying 100% of your records in your transaction prior to actually committing them (doing the DML transactions) to the database (objects). It makes sure that you are doing the absolute minimal transactions necessary to be successful. For instance, maybe for some reason in your code you are updating cases in one method, and when you’re done you call another method that updates those same cases… why do that? You could register all those updates and update all those cases at once with one DML statement. Whether you realize it at the time or not, every DML statement counts… use them sparingly.
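To illustrate, two methods can register changes against the same unit of work so only one update statement goes out at commit time (Case_Service and its method names are hypothetical):

```apex
public with sharing class Case_Service {
    public static void escalateCases(List<Case> cases) {
        fflib_ISObjectUnitOfWork uow = Application.UOW.newInstance();
        raisePriority(cases, uow);
        tagSubject(cases, uow);
        //A single update DML statement covers all registered cases
        uow.commitWork();
    }

    private static void raisePriority(List<Case> cases, fflib_ISObjectUnitOfWork uow) {
        for (Case c : cases) {
            c.Priority = 'High';
            uow.registerDirty(c);
        }
    }

    private static void tagSubject(List<Case> cases, fflib_ISObjectUnitOfWork uow) {
        for (Case c : cases) {
            c.Subject = '[Escalated] ' + c.Subject;
            uow.registerDirty(c);
        }
    }
}
```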
DML Mocking for Unit Tests
If you’re not sure what mocking and unit tests are, then definitely check out my section on that in the wiki here. Basically, in an ideal scenario you would like to do unit testing, but unit testing depends on you having the ability to mock classes for your tests (basically creating fake versions of your classes you have complete control over in your tests). Creating this layer that handles your DML transactions allows you to mock that layer in your classes when doing unit tests… If this is confusing, no worries, we’ll discuss it a bunch more in the last three sections of this wiki.
It is a foundation built to allow you to leverage the unit of work design pattern from within Salesforce. Basically this class is designed to hold your database operations (insert, update, etc) in memory until you are ready to do all of your database transactions in one big transaction. It also handles savepoint rollbacks to ensure data consistency. For instance, if you are inserting Opportunities with Quotes in the same database (DML) transaction, chances are you don’t wanna insert those Opportunities if your Quotes fail to insert. The unit of work class is set up to automatically handle that transaction management and roll back if anything fails.
It also follows bulkification best practices to make your life even easier when dealing with DML transactions.
Why is this class used?
This class is utilized so that you can have super fine control over your database transactions and so that you only do DML transactions when every single record is prepped and ready to be inserted, updated, etc.
Additionally there are three reasons it is important to leverage this class (or a class like it): 1) To allow for DML mocking in your test classes. 2) To massively reduce duplicate code for DML transactions in your org. 3) To make DML transaction management consistent.
Think about those last two for a second… how many lines of code in your org insert, update, upsert (etc) records? Then think about how much code also error handles those transactions and (if you’re doing things right) how much code goes into savepoint rollbacks. That all adds up over time to a ton of code. This class houses it all in one centralized apex class. You’ll never have to re-write all that logic again.
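As a rough before-and-after (the newAccounts and newContacts lists are placeholders), this is the boilerplate the unit of work centralizes:

```apex
//Without a unit of work: every caller repeats savepoint and error handling
Savepoint sp = Database.setSavepoint();
try {
    insert newAccounts;
    insert newContacts;
} catch (Exception e) {
    Database.rollback(sp);
    throw e;
}

//With a unit of work: ordering, savepoints and rollback are handled for you
fflib_ISObjectUnitOfWork uow = Application.UOW.newInstance();
uow.registerNew(newAccounts);
uow.registerNew(newContacts);
uow.commitWork();
```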
How to Register a Callback method for an Apex Commons UOW
The following code example shows you how to setup a callback method for your units of work using the fflib_SObjectUnitOfWork.IDoWork interface, should you need them.
public inherited sharing class HelpDeskAppPostCommitLogic implements fflib_SObjectUnitOfWork.IDoWork
{
    List<Task> taskList;

    public HelpDeskAppPostCommitLogic(List<Task> taskList){
        this.taskList = taskList;
    }

    public void doWork(){
        //write callback code here
    }
}
The code below shows you how to actually make sure your unit of work calls your callback method.
fflib_ISObjectUnitOfWork uow = Helpdesk_Application.helpDeskUOW.newInstance();
//code to create some tasks
uow.registerNew(newTasks);
uow.registerWork(new HelpDeskAppPostCommitLogic(newTasks));
uow.commitWork();
Apex Commons Unit of Work Limitations
1) Records within the same object that have lookups to each other are currently not supported. For example, if the Account object has a Lookup to itself, that relationship cannot be registered.
2) You cannot do partial success (allOrNone = false) database transactions, like the one below, without creating a custom IDML implementation.
Database.insert(acctList, false);
3) To send emails with the Apex Commons UOW you must utilize the special registerEmail method.
4) It does not manage FLS and CRUD without implementing a custom class that implements the IDML interface and does that for you.
How and When to use the fflib_SObjectUnitOfWork IDML Interface
If your unit of work needs a custom implementation for inserting, updating, deleting, etc that is not supported by the SimpleDML inner class, then you are gonna want to create a new class that implements the fflib_SObjectUnitOfWork.IDML interface. After you create that class, if you are using the Application factory you would instantiate your unit of work like so: Application.uow.newInstance(new customIDMLClass()); otherwise you would initialize it like this: fflib_SObjectUnitOfWork uow = new fflib_SObjectUnitOfWork(new List<SObjectType>{Case.SObjectType}, new customIDMLClass());. A CUSTOM IDML CLASS IS SUPER IMPORTANT IF YOU WANT TO MANAGE CRUD AND FLS! The fflib_SObjectUnitOfWork class does not do that for you! So let’s check out an example of how to implement a custom IDML class together below.
Example of an IDML Class
//Implementing this class allows you to overcome the limitations of the regular unit of work class.
public with sharing class IDML_Example implements fflib_SObjectUnitOfWork.IDML
{
    public void dmlInsert(List<SObject> objList){
        //custom insert logic here
    }
    public void dmlUpdate(List<SObject> objList){
        //custom update logic here
    }
    public void dmlDelete(List<SObject> objList){
        //custom delete logic here
    }
    public void eventPublish(List<SObject> objList){
        //custom event publishing logic here
    }
    public void emptyRecycleBin(List<SObject> objList){
        //custom empty recycle bin logic here
    }
}
fflib_SObjectUnitOfWork class method cheat sheet
This does not encompass all methods in the fflib_SObjectUnitOfWork class, however it does cover the most commonly used methods. There are also methods in this class to publish platform events should you need them but they aren’t covered below.
The Service Layer, “Defines an application’s boundaries with a layer of services that establishes a set of available operations and coordinates the application’s response in each operation”. – Martin Fowler
This essentially just means that the service layer should house your business logic. It should be a centralized place that holds code that represents business logic for each object (database table) or the service layer logic for a custom built app in your org (more common when building managed packages).
Difference between the Service Layer and Domain Layer – People seem to often confuse this layer with the Domain layer. The Domain layer is only for object specific default operations (triggers, validations, updates that should always execute on a database transaction, etc). The Service layer is for business logic for major modules/applications in your org. Sometimes that module is represented by an object, sometimes it is represented by a grouping of objects. Domain layer logic is specific to each individual object whereas services often are not.
Service Layer Naming Conventions
Class Names – Your service classes should be named after the area of the application your services represent. Typically, service classes are created for important objects or applications within your org.
Service Class Name Examples (Note that I prefer underscores in class names, this is just personal preference):
Account_Service
DocumentGenerationApp_Service
Method Names – The public method names should be the names of the business operations they represent. The method names should reflect what the end users of your system would refer to the business operation as. Service layer methods should also ideally always be static.
Method Parameter Types and Naming – The method parameters in public methods for the service layer should typically only accept collections (Map, Set, List) as the majority of service layer methods should be bulkified (there are some scenarios however that warrant non-collection types). The parameters should be named something that reflects the data they represent.
Service Class Method Names and Parameter Examples:
public static void calculateOpportunityProfits(List<Account> accountsToCalculate)
public static void generateWordDocument(Map<String, SObject> sObjectByName)
Service Layer Security
Service Layer Security Enforcement – Service layers hold business logic so by default they should at minimum use inherited sharing when declaring the classes, however I would suggest always using with sharing and allowing developers to elevate the code to run without sharing when necessary by using a private inner class.
Example Security for a Service Layer Class:
public with sharing class Account_Service{
    public static void calculateOpportunityProfits(List<Account> accountsToCalculate){
        //code here
        new Account_Service_WithoutSharing().calculateOpportunityProfits_WithoutSharing(accountsToCalculate);
    }

    private without sharing class Account_Service_WithoutSharing{
        public void calculateOpportunityProfits_WithoutSharing(List<Account> accountsToCalculate){
            //code here
        }
    }
}
Service Layer Code Best Practices
Keeping the code as flexible as possible
You should make sure that the code in the service layer does not expect the data passed to it to be in any particular format. For instance, if the service layer code is expecting a List of Accounts that has a certain set of fields filled out, your service method has just become very fragile. What if the service needs an additional field on that list of accounts to be filled out in the future to do its job? Then you have to refactor all the places building lists of data to send to that service layer method.
Instead you could pass in a set of Account Ids, have the service method query for all the fields it actually requires itself, and then return the appropriate data. This will make your service layer methods much more flexible.
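To make that concrete, here's a hypothetical sketch of the difference (the class, method, and field choices are just for illustration, not from the original post):

```apex
public with sharing class Account_Service_Sketch {
    //Fragile: this only works if every caller remembered to query AnnualRevenue first
    public static void calculateOpportunityProfits(List<Account> accountsToCalculate){
        for(Account acct : accountsToCalculate){
            //throws an SObjectException if the caller didn't include this field in its query
            Decimal revenue = acct.AnnualRevenue;
        }
    }

    //Flexible: the service queries every field it needs itself
    public static void calculateOpportunityProfits(Set<Id> accountIdsToCalculate){
        List<Account> accountsToCalculate =
            [SELECT Id, AnnualRevenue FROM Account WHERE Id IN :accountIdsToCalculate];
        //calculations can now rely on AnnualRevenue being populated
    }
}
```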
Transaction Management
Your service layer methods should handle transaction management (either with the unit of work pattern or otherwise) by leveraging Database.setSavepoint() and using try/catch blocks to roll back when execution fails.
Transaction management example
public static void calculateOpportunityProfits(Set<Id> accountIdsToCalculate){
    List<Account> accountsToCalculate = [SELECT Id FROM Account WHERE Id IN :accountIdsToCalculate];
    System.Savepoint savePoint = Database.setSavepoint();
    try{
        //do the calculations, then persist them (we update the queried records rather
        //than inserting them, since they already exist)
        Database.update(accountsToCalculate);
    }
    catch(Exception e){
        Database.rollback(savePoint);
        throw e;
    }
}
Compound Services
Sometimes code needs to call more than one method in the service layer of your code. In this case instead of calling both service layer methods from your calling code like in the below example, you would ideally want to create a compound service method in your service layer.
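The code example for this section didn't survive in this excerpt, but the pattern being warned against looks roughly like this (the service and method names are hypothetical):

```apex
//Calling code stitching two service calls together itself. Each call manages its own
//transaction, so the first can succeed while the second fails.
Opportunity_Service.applyDiscounts(opportunityIds, discountPercentage);
Opportunity_Service.generateRenewalOpportunities(opportunityIds);
```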
The reason calling the two service methods separately is detrimental is that you end up with one of two side effects. Either transaction management is handled separately by each method, so one could fail while the other completes successfully, despite the fact we don't actually want that to happen. Alternatively, you could handle transaction management in the class calling the service layer, which isn't ideal either.
Instead we should create a new method in the service layer that combines those methods and handles the transaction management in a cleaner manner.
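A hedged sketch of such a compound service method, again with hypothetical names:

```apex
//One compound service method that wraps both operations in a single transaction
public static void applyDiscountsAndGenerateRenewals(Set<Id> opportunityIds, Decimal discountPercentage){
    System.Savepoint savePoint = Database.setSavepoint();
    try{
        applyDiscounts(opportunityIds, discountPercentage);
        generateRenewalOpportunities(opportunityIds);
    }
    catch(Exception e){
        //if either operation fails, neither one's DML sticks
        Database.rollback(savePoint);
        throw e;
    }
}
```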
To find out how to implement the Service Layer using the Apex Common Library, continue reading here: Implementing the Service Layer with the Apex Common Library. If you're not interested in utilizing the Apex Common Library, no worries. There are really no frameworks for implementing a Service Layer (to my knowledge) because this is literally just a business logic layer, so every single org's service layer will be different. The only thing Apex Common assists with here is abstracting the service layer to assist with unit test mocking and to make your service class instantiations more dynamic.
Libraries That Could Be Used for the Service Layer
None to my knowledge although the Apex Common Library provides a good foundation for abstracting your service layers to assist with mocking and more dynamic class instantiations.
Service Layer Examples
Apex Common Example (Suggested)
All three of the below classes are tied together. We’ll go over how this works in the next section.
There is NO FRAMEWORK that can be made for service layer classes. This is a business logic layer and it will differ everywhere. No two businesses are identical. That being said, if you would like to leverage all of the other benefits of the Apex Common Library (primarily Apex Mocks) and you would like your service classes to be able to leverage the fflib_Application class to allow for dynamic runtime logic generation, you’ll need to structure your classes as outlined below. If you don’t want to leverage these things, then don’t worry about doing what is listed below… but trust me, in the long run it will likely be worth it as your org grows in size.
The Service Interface
For every service layer class you create, you will create an interface (or potentially a virtual class you can extend) that your service layer implementation class will implement (more on that below). This interface will have every method in your class represented in it. An example of a service interface is below. Some people like to prefix their interfaces with the letter I (example: ICaseService); however, I prefer to postfix with _I or _Interface as it's a bit clearer in my opinion.
The methods in this interface should represent all of the public methods you plan to create for this service class. Private methods should not be represented here.
public interface Task_Service_Interface
{
    void createTasks(Set<Id> recordIds, Schema.SObjectType objectType);
}
The Service Layer Class
This class is where things get a little confusing in my opinion, but here's the gist of it. This is the class you will actually call in your apex controllers (or occasionally domain classes) to execute the code… however there are no real implementation details in it (those exist in the implementation class outlined below). The reason this class sits in as a kind of middle man is that we want our controller classes, batch classes, domain classes, etc. to never need to alter the class they call to get the work done, no matter what business logic is actually called at run time. In the Service Factory section below we'll see how that becomes a huge factor. Below is an example of the Service Layer class setup.
//This class is what every calling class will actually call to. For more information on the
//Application class, check out the fflib_Application class part of this wiki.
public with sharing class Task_Service
{
    //This literally just calls the Task_Service_Impl class's createTasks method
    public static void createTasks(Set<Id> recordIds, Schema.SObjectType objectType){
        service().createTasks(recordIds, objectType);
    }

    //This gets an instance of the Task_Service_Impl class from our Application class.
    //This method exists for ease of use in the other methods in this class.
    private static Task_Service_Interface service(){
        return (Task_Service_Interface) Application.service.newInstance(Task_Service_Interface.class);
    }
}
The Service Implementation Class
This is the concrete business logic implementation. This is the code that isn't super abstract; it's the more custom built business logic specific to the business (or business unit) that needs it to be executed. Basically, this is where your actual business logic should reside. Now, again, you may be asking, but Matt… why not just create a new instance of this class and use it directly? Why create some silly interface and some middle man class to call this class? This isn't gonna be superrrrrrr simple to wrap your head around, but bear with me. In the next section we tie all these classes together and paint the bigger picture. An example of a Service Implementation class is below.
/**
* @description This is the true implementation of your business logic for your service layer.
* These impl classes are where all the magic happens. In this case this is a service class
* that executes the business logic for abstract Task creation on any theoretical object.
*/
public with sharing class Task_Service_Impl implements Task_Service_Interface
{
    //This method creates tasks and MUST BE IMPLEMENTED since we are implementing the
    //Task_Service_Interface
    public void createTasks(Set<Id> recordIds, Schema.SObjectType objectType)
    {
        //Getting a new instance of a domain class based purely on the ids of our records.
        //If these were case ids it would return a Case domain class, if they were contacts
        //it would return a Contact domain class.
        fflib_ISObjectDomain objectDomain = Application.domain.newInstance(recordIds);

        //Getting a new instance of our selector class based purely on the object type
        //passed. If we passed in a Case object type we would get a case selector, a
        //Contact object type a contact selector, etc.
        fflib_ISObjectSelector objectSelector = Application.selector.newInstance(objectType);

        //We're creating a new unit of work instance from our Application class.
        fflib_ISObjectUnitOfWork unitOfWork = Application.UOW.newInstance();

        //List to hold our records that need tasks created for them
        List<SObject> objectsThatNeedTasks = new List<SObject>();

        //If our selector class is an instance of Task_Selector_Interface (if it implements
        //the Task_Selector_Interface interface) call the selectRecordsForTasks() method in
        //the class. Otherwise just call the selectSObjectsById method.
        if(objectSelector instanceof Task_Selector_Interface){
            Task_Selector_Interface taskFieldSelector = (Task_Selector_Interface)objectSelector;
            objectsThatNeedTasks = taskFieldSelector.selectRecordsForTasks();
        }
        else{
            objectsThatNeedTasks = objectSelector.selectSObjectsById(recordIds);
        }

        //If our domain class is an instance of the Task_Creator_Interface (implements the
        //Task_Creator_Interface) call the createTasks method.
        if(objectDomain instanceof Task_Creator_Interface){
            Task_Creator_Interface taskCreator = (Task_Creator_Interface)objectDomain;
            taskCreator.createTasks(objectsThatNeedTasks, unitOfWork);
        }

        //Try committing the records we've created and/or updated in our unit of work
        //(we're basically doing all our DML at once here), else throw an exception.
        try{
            unitOfWork.commitWork();
        }
        catch(Exception e){
            throw e;
        }
    }
}
The fflib_Application.ServiceFactory class
The fflib_Application.ServiceFactory class… what is it and how does it fit in here? Well, if you read through all of Part 4: The fflib_Application Class then you hopefully have some solid background on what it's used for and why, but it's a little trickier to conceptualize for the service class, so let's go over it a bit again. Basically it leverages The Factory Pattern to dynamically generate the correct code implementations at run time (when your code is actually running).
This is awesome for tons of stuff, but it's especially awesome for the service layer. Why? You'll notice that as your Salesforce instance grows, so does the number of interested parties. All of a sudden you've gone from one or two business units to 25 different business units, and what happens when those businesses need the same type of functionality with differing logic? You could make tons of if/else statements determining what the user type is and then call different methods based on that user's type… but maybe there's an easier way. If you are an ISV (a managed package provider), what I'm about to show you is likely 1000 times more important for you. If your product grows and people start adopting it, you absolutely need a way to allow flexibility in your application's business logic, maybe even allow your subscribers to write their own logic and have a way for your code to execute it.
Let’s check out how allllllllllll these pieces come together below.
Tying all the classes together
Alright, let's tie everything together piece by piece. Pretend we've got a custom metadata type that maps our service interfaces to a service class implementation and a custom user permission (or if you don't wanna pretend you can check it out here). Let's first start by creating our new class that extends the fflib_Application.ServiceFactory class and overrides its newInstance method.
/*
@description: This class is an override for the prebuilt fflib_Application.ServiceFactory
that allows us to dynamically call service classes based on the running user's custom
permissions.
*/
public with sharing class ServiceFactory extends fflib_Application.ServiceFactory
{
    Map<String, Service_By_User_Type__mdt> servicesByUserPermAndInterface = new Map<String, Service_By_User_Type__mdt>();

    public ServiceFactory(Map<Type, Type> serviceInterfaceByServiceImpl){
        super(serviceInterfaceByServiceImpl);
        this.servicesByUserPermAndInterface = getServicesByUserPermAndInterface();
    }

    //Overriding the fflib_Application.ServiceFactory newInstance method to allow us to
    //initialize a new service implementation type based on the running user's custom
    //permissions and the interface name passed in.
    public override Object newInstance(Type serviceInterfaceType){
        for(Service_By_User_Type__mdt serviceByUser : servicesByUserPermAndInterface.values()){
            if(servicesByUserPermAndInterface.containsKey(serviceByUser.User_Permission__c + serviceInterfaceType.getName())){
                Service_By_User_Type__mdt overrideClass = servicesByUserPermAndInterface.get(serviceByUser.User_Permission__c + serviceInterfaceType.getName());
                return Type.forName(overrideClass.Service_Implementation_Class__c).newInstance();
            }
        }
        return super.newInstance(serviceInterfaceType);
    }

    //Creating our map of overrides by our user custom permissions
    private Map<String, Service_By_User_Type__mdt> getServicesByUserPermAndInterface(){
        Map<String, Service_By_User_Type__mdt> servicesByUserType = new Map<String, Service_By_User_Type__mdt>();
        for(Service_By_User_Type__mdt serviceByUser : Service_By_User_Type__mdt.getAll().values()){
            //Checking to see if the running user has any of the permissions for our
            //overrides, if so we put the overrides in a map
            if(FeatureManagement.checkPermission(serviceByUser.User_Permission__c)){
                servicesByUserType.put(serviceByUser.User_Permission__c + serviceByUser.Service_Interface__c, serviceByUser);
            }
        }
        return servicesByUserType;
    }
}
Cool kewl cool, now that we have our custom ServiceFactory built to manage our overrides based on the running user's custom permissions, we can leverage it in the Application Factory class we've hopefully built by now like so:
public with sharing class Application
{
    //Domain, Selector and UOW factories have been omitted for brevity, but should be
    //added to this class.

    //This allows us to create a factory for instantiating service classes. You send it
    //the interface for your service class and it will return the correct service layer
    //class. Example initialization:
    //Object objectService = Application.service.newInstance(Task_Service_Interface.class);
    public static final fflib_Application.ServiceFactory service =
        new ServiceFactory(
            new Map<Type, Type>{
                Task_Service_Interface.class => Task_Service_Impl.class
            });
}
Ok, we've done the hardest parts now. Next we need to pretend that we are using the service class interface, service implementation class and service class that we already built earlier (scroll up to those sections and review them if you forgot), because we're about to see how a controller would call this task service we've built.
public with sharing class Abstract_Task_Creator_Controller
{
    @AuraEnabled
    public static void createTasks(Id recordId){
        Set<Id> recordIds = new Set<Id>{recordId};
        Schema.SObjectType objectType = recordId.getSobjectType();
        try{
            Task_Service.createTasks(recordIds, objectType);
        }
        catch(Exception e){
            throw new AuraHandledException(e.getMessage());
        }
    }
}
Now you might be wracking your brain right now and being like… ok, so what… but look closer Simba. This controller will literally never grow, and neither will the ServiceFactory class we've built above (the Application class might, but very little). This Task_Service middle man layer is so abstract you can swap out service implementations on the fly whenever you want and this controller will NEVER NEED TO BE UPDATED (at least not for task service logic)! Basically the only things that will change at this point are your custom metadata type records, the custom permissions you map to users, and the new variations of the Task Service Implementation class you'll add over time for the various business units that get onboarded and want to use it. However, your controllers (and other places in the code that call the service) will never know the difference. Wyld right? If you're lost right now, let's follow the chain of events step by step to clarify some things:
1) Controller calls the Task_Service class's (the middleman) createTasks() method.
2) Task_Service's createTasks() method calls its service() method.
3) The service() method uses the Application class's "service" variable, which is an instance of our custom ServiceFactory class (shown above), to create a new instance of whatever Task Service Implementation class (which implements Task_Service_Interface, making it of type Task_Service_Interface) is relevant for the running user's assigned custom permissions, by using the newInstance() method the ServiceFactory class overrode.
4) The service variable returns the correct Task Service Implementation for the running user.
5) The createTasks() method is called on whatever Task Service Implementation was determined to be correct for the running user.
6) Tasks are created!
If you’re still shook by all this, please, watch the video where we build all this together step by step and walk through everything. I promise, even if it’s a bit confusing, it’s worth the time to learn.
Why Create an LWC that can Generate Word Documents?
This one is a little more self-explanatory than many of the blog posts I do, but let's go over some things. You typically wanna create this because the business has a need (for one reason or another) to generate a word doc. I've had businesses need them so important people could sign off on something with a hand-written signature, need guest lists printed for campaigns/events, and several other scenarios.
As far as why we should use an LWC to do this instead of a VF Page or Aura Component: Aura Components are considerably slower (I would just suggest not making them anymore in general), and VF Pages suffer from view state limitations. While VF Pages are easier to deal with when working with external javascript libraries (no Lightning Locker Service to contend with), it's easy to generate a document with images that blows past the 170kb view state limit and crashes your page.
The docx.js Javascript Library
To generate word documents, we need to use the docx.js javascript library, which, thankfully, is Locker Service compliant! That saves us a lot of time (if you didn't know, you can modify most libraries to make them compliant). You can get the docxjs code we're gonna be using for this tutorial here.
This library basically allows you to generate word documents using javascript. It makes your life doing this a thousand times easier, so make sure to thank the devs that built it!
Writing the Code
The code we’re gonna write to get this done is just for a simple example. We’re gonna generate a list of contacts associated with an account in a word document. Before we get started, all this code is up on my GitHub here, so if you wanna just ignore this whole section and check out the GitHub repo, feel free, otherwise, please carry on, lol. So first things first, open up VSCode and create a new lightning web component! If you aren’t familiar with how to setup VSCode, I have a video covering it here!
Once you’ve got your new LWC created in VSCode, we need to upload the docxjs code to static resources so that we can use it in our LWC. You can grab the docxjs code here. Then navigate to static resources in setup and upload the docxjs code there. Make sure to make the static resource public!
After that’s done, switch back over to VSCode and let’s import the docxjs file into the LWC by using the code below:
import { LightningElement } from 'lwc';
import { loadScript } from "lightning/platformResourceLoader";
import docxImport from "@salesforce/resourceUrl/docx";

export default class Contact_list_generator extends LightningElement {
    connectedCallback(){
        Promise.all([loadScript(this, docxImport)]).then(() => {
            //call some code here
        });
    }
}
You may be looking at the above like, "wtf is that bro?" so let me explain. The connectedCallback method is called when your LWC is loaded into the browser, so it's kinda like the init method in Aura components. Promise.all is just saying, "Hey, I promise to wait until all the scripts are loaded, then I'm gonna execute the code inside this code block". loadScript is a module that Salesforce provides that allows you to load resources into your LWC from static resources. Last, but certainly not least, the import docxImport from "@salesforce/resourceUrl/docx"; line is the actual reference to your docx static resource file. The docx at the end of that line should be whatever you actually named your static resource.
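The HTML template being described below didn't survive in this excerpt; a minimal reconstruction of it (the button labels and download file name here are assumptions) would look something like this:

```html
<template>
    <!-- hidden until the docx script finishes loading -->
    <lightning-button class="hidden" label="Generate Document"
                      onclick={startDocumentGeneration}></lightning-button>
    <!-- hidden until the document has been generated and downloadURL is set -->
    <a class="slds-hide" href={downloadURL} download="contacts.docx">Download Document</a>
</template>
```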
In the HTML above we’re basically just creating two buttons, one to generate a word document and one to download that word document. You’ll notice there are two references in the HTML to js variables/methods that don’t exist yet (startDocumentGeneration and downloadURL) so let’s get back to the LWC js controller and figure this thing out.
The next thing we need to add is a way to render the generate document button after the component loads the docxjs script. We can do that with the following code
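Reconstructed here from the full controller listing further down, that code looks like this:

```javascript
connectedCallback(){
    Promise.all([loadScript(this, docxImport)]).then(() => {
        this.renderButtons();
    });
}

renderButtons(){
    //un-hide the generate button now that docxjs is loaded
    this.template.querySelector(".hidden").classList.remove("hidden");
}
```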
In our connected callback method we’re gonna call a method called renderButtons that changes the visibility of our buttons after our scripts are loaded. The “this.template.querySelector(“.hidden”).classList.remove(“hidden”);” is removing the class that was hiding the component and allowing it to be viewed and clickable. We do need to actually add the css though. So let’s add the css below to the component
.hidden{
display: none;
}
Basically that css just allows you to hide an element… pretty simple. Not much there.
The next thing we need to do is create the startDocumentGeneration method and actually grab our contact data and build the document. So let’s look at the rest of the controller code we need to build out below.
import { LightningElement, api } from 'lwc';
import { loadScript } from "lightning/platformResourceLoader";
import docxImport from "@salesforce/resourceUrl/docx";
import contactGrab from "@salesforce/apex/ContactGrabber.getAllRelatedContacts";

export default class Contact_list_generator extends LightningElement {
    @api recordId;
    downloadURL;
    _no_border = {
        top: {style: "none", size: 0, color: "FFFFFF"},
        bottom: {style: "none", size: 0, color: "FFFFFF"},
        left: {style: "none", size: 0, color: "FFFFFF"},
        right: {style: "none", size: 0, color: "FFFFFF"}
    };

    connectedCallback(){
        Promise.all([loadScript(this, docxImport)]).then(() => {
            this.renderButtons();
        });
    }

    renderButtons(){
        this.template.querySelector(".hidden").classList.remove("hidden");
    }

    startDocumentGeneration(){
        contactGrab({'acctId': this.recordId}).then(contacts => {
            this.buildDocument(contacts);
        });
    }

    buildDocument(contactsPassed){
        let document = new docx.Document();
        let tableRows = [];
        tableRows.push(this.generateHeaderRow());
        contactsPassed.forEach(contact => {
            tableRows.push(this.generateRow(contact));
        });
        this.generateTable(document, tableRows);
        this.generateDownloadLink(document);
    }

    generateHeaderRow(){
        let tableHeaderRow = new docx.TableRow({
            children: [
                new docx.TableCell({
                    children: [new docx.Paragraph("First Name")],
                    borders: this._no_border
                }),
                new docx.TableCell({
                    children: [new docx.Paragraph("Last Name")],
                    borders: this._no_border
                })
            ]
        });
        return tableHeaderRow;
    }

    generateRow(contactPassed){
        let tableRow = new docx.TableRow({
            children: [
                new docx.TableCell({
                    children: [new docx.Paragraph({children: [this.generateTextRun(contactPassed["FirstName"].toString())]})],
                    borders: this._no_border
                }),
                new docx.TableCell({
                    children: [new docx.Paragraph({children: [this.generateTextRun(contactPassed["LastName"].toString())]})],
                    borders: this._no_border
                })
            ]
        });
        return tableRow;
    }

    generateTextRun(cellString){
        let textRun = new docx.TextRun({text: cellString, bold: true, size: 48, font: "Calibri"});
        return textRun;
    }

    generateTable(documentPassed, tableCellsPassed){
        let docTable = new docx.Table({
            rows: tableCellsPassed
        });
        documentPassed.addSection({
            children: [docTable]
        });
    }

    generateDownloadLink(documentPassed){
        docx.Packer.toBase64String(documentPassed).then(textBlob => {
            this.downloadURL = 'data:application/vnd.openxmlformats-officedocument.wordprocessingml.document;base64,' + textBlob;
            this.template.querySelector(".slds-hide").classList.remove("slds-hide");
        });
    }
}
So, there's a bit to cover here, lol, so let's start with the call into the apex controller to get our contacts. This line of code here:
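Pulled from the startDocumentGeneration method in the full listing above:

```javascript
contactGrab({'acctId': this.recordId}).then(contacts => {
    this.buildDocument(contacts);
});
```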
This calls the apex controller and retrieves a list of contacts based on the account id of the record we're currently on. If you didn't know, the @api recordId variable declaration at the top of the class just dynamically pulls in the id of the record your component is being viewed on! Super convenient! We are also able to call our apex method using the contactGrab({'acctId': this.recordId}) statement because we imported the apex method at the top of the LWC with import contactGrab from "@salesforce/apex/ContactGrabber.getAllRelatedContacts";. That being said, we haven't looked at the apex code yet, so let's check it out… there's not much there but it's still important.
public with sharing class ContactGrabber {
    @AuraEnabled
    public static List<Contact> getAllRelatedContacts(Id acctId){
        return [SELECT Id, FirstName, LastName FROM Contact WHERE AccountId = :acctId];
    }
}
The @AuraEnabled annotation allows us to import this method from our class into the LWC. It's important to do that, so don't forget!
Now that we have our contacts we actually need to generate the document. So let's get to it bruh! The buildDocument method starts this process so let's check it out first.
buildDocument(contactsPassed){
    let document = new docx.Document();
    let tableRows = [];
    tableRows.push(this.generateHeaderRow());
    contactsPassed.forEach(contact => {
        tableRows.push(this.generateRow(contact));
    });
    this.generateTable(document, tableRows);
    this.generateDownloadLink(document);
}
In the code above we're declaring a new docx Document object with the new docx.Document() declaration. After this we create an array of table rows (because in this example we are building a table of contacts in a word document). We then push a header row into the table rows array by calling the generateHeaderRow method in our js controller. Let's check out that method next.
generateHeaderRow(){
    let tableHeaderRow = new docx.TableRow({
        children: [
            new docx.TableCell({
                children: [new docx.Paragraph("First Name")],
                borders: this._no_border
            }),
            new docx.TableCell({
                children: [new docx.Paragraph("Last Name")],
                borders: this._no_border
            })
        ]
    });
    return tableHeaderRow;
}
The generateHeaderRow method uses the docx.TableRow, docx.TableCell and docx.Paragraph objects to generate a table row with two cells: one cell for the contact's first name and another for the contact's last name. It then returns this table row.
Let’s get back to the buildDocument method now. The next thing that happens is we iterate through the list of contacts that we pulled from our apex controller and generate a table row for each contact by calling the generateRow method and push that into our tableRows array. So let’s look at the generateRow method next.
generateRow(contactPassed){
    let tableRow = new docx.TableRow({
        children: [
            new docx.TableCell({
                children: [new docx.Paragraph({children: [this.generateTextRun(contactPassed["FirstName"].toString())]})],
                borders: this._no_border
            }),
            new docx.TableCell({
                children: [new docx.Paragraph({children: [this.generateTextRun(contactPassed["LastName"].toString())]})],
                borders: this._no_border
            })
        ]
    });
    return tableRow;
}
This code does something similar to the generateHeaderRow method; the only difference between the two is that I call the generateTextRun method instead of outright declaring a new docx.Paragraph object. The docx.TextRun object allows us to specify traits for our text: things like font size, font type, whether the text is bold and a ton more. Let's check out the generateTextRun method to see what it's doing.
generateTextRun(cellString){
    let textRun = new docx.TextRun({text: cellString, bold: true, size: 48, font: "Calibri"});
    return textRun;
}
In the method above we are generating what docxjs calls a text run and then returning it. It’s pretty simple as you can see. I’m just declaring the traits I want for my text. Nothing more, nothing less.
Back to the buildDocument method then! The next thing we do is call the generateTable method and pass it our docx.Document object along with our array of table rows. Let's check out that method next!
generateTable(documentPassed, tableCellsPassed){
    let docTable = new docx.Table({
        rows: tableCellsPassed
    });
    documentPassed.addSection({
        children: [docTable]
    });
}
In this method we are creating a new docx.Table and assigning the array of rows we passed to this method to the rows parameter of the docx.Table. We then proceed to add a new section to our docx.Document and put the table in that section. This actually adds the table to the document we are creating.
Now, one last time, let’s check out the buildDocument method again. The last thing we do in it is call the generateDownloadLink method. So let’s take a look at that method now.
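Here it is again, pulled from the full controller listing above:

```javascript
generateDownloadLink(documentPassed){
    docx.Packer.toBase64String(documentPassed).then(textBlob => {
        this.downloadURL = 'data:application/vnd.openxmlformats-officedocument.wordprocessingml.document;base64,' + textBlob;
        this.template.querySelector(".slds-hide").classList.remove("slds-hide");
    });
}
```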
What this method does, is take the document we built and create a url that will allow us to download the word doc. It also turns on the download button in our LWC. We generate a base64 encoded string using the docx.Packer object and assign it to the downloadURL.
And believe it or not, that’s it! Yea!!! You’ve just figured out how to build your own LWC that can produce word documents. You can build off this base to do whatever you think you might wanna do. You could, with the help of other js libraries build a whole document templating app. I’ve done it in the past, it’s challenging, but doable! Good luck building whatever it is you’re building with this!
In short, it's gonna save you a bunch of time, code and unnecessary configuration, especially when you are authenticating using OAuth. Named credentials basically simplify the authentication portion of your callouts to external services and allow you to do it declaratively through configuration. No matter how hardcore a dev you are, they are 100% worth your time and effort to learn how to use. I promise.
How do you setup a named credential?
You traverse to Setup -> Named Credentials to set up the named credential of your choosing. Named Credentials allow you to authenticate via the vast majority of the authentication methods used by external service providers. You will likely even be able to connect to your internal databases via named credentials if you need to. I’m not gonna go over them all individually in this article. In the video above I go over three different Named Credential types and how to configure them. If you’re interested in that portion, please check it out!
How do we reference named credentials in the code?
This literally could not be easier. In fact it’s so simple I think it confuses the hell out of some people, lol. I will give you a simple example below that connects to GitHub via OAuth:
public class GithubOAuthCallout {
    public static void callGitHub(){
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:GitHub_OAuth/users/Coding-With-The-Force/repos');
        req.setMethod('GET');
        req.setHeader('Accept', 'application/json');
        req.setHeader('Content-Type', 'application/json');
        Http http = new Http();
        HTTPResponse res = http.send(req);
        System.debug(res.getBody());
    }
}
There are a couple of important things to point out in the code above:
1) When we set the endpoint for the HttpRequest, we add the value ‘callout:GitHub_OAuth’. This is how we reference our Named Credential. When you set the endpoints for your HttpRequests, you pass in your Named Credential using the following format: callout:[The name of your named credential].
2) If you’ve ever requested data using OAuth authentication you know that we seem to be missing a few steps… We’re not calling out to any authorization endpoints or getting an access token anywhere in the above code. We’re also not setting an authorization header parameter. THAT’S BECAUSE SALESFORCE DOES IT ALL FOR YOU AUTOMATICALLY! Yes… you read that right, automatically, no need to write that code yourself. That ‘callout:GitHub_OAuth’ is doing a ton of behind the scenes magic. It gets that OAuth token for you and automatically sets the authorization header parameter with that token. So wyld right?
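To make the endpoint format from point 1 concrete, here it is spelled out as a tiny string-building helper. This is only an illustration of the callout:[name][path] format; the function name and values are made up for this example, and in real Apex you would just concatenate the string as shown in the class above.

```javascript
// Illustrates the Named Credential endpoint format: callout:[name][path].
// Hypothetical helper, purely to show how the pieces of the string line up.
function namedCredentialEndpoint(credentialName, path) {
    return 'callout:' + credentialName + path;
}

console.log(namedCredentialEndpoint('GitHub_OAuth', '/users/Coding-With-The-Force/repos'));
// → callout:GitHub_OAuth/users/Coding-With-The-Force/repos
```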
Hopefully just that simple example above makes you think twice about choosing to not use named credentials… and if it doesn’t, you probably haven’t done many integrations with external systems yet and don’t realize how much time this saves. IT SAVES A TON OF TIME, CONFIGURATION AND CODE! Trust me on this one. I promise I’m not selling you garbage here. It’s worth using 100% of the time.
Aside from the fact that auto-code formatting is both magical and amazing, it also enables you to easily do a couple of things:
1) If you get onboarded to a project with a horrific looking codebase with nightmarish formatting, you can fix it by running a single command on the entire thing. This saves you what would otherwise be dozens, if not hundreds, of hours of work.
2) If you are on a large team of developers and you want all of your code to look the same (at least on commit to your codebase), you can create one code style configuration file and distribute it to the entire team for use. This way your code all looks and reads the same, which makes everything just a little bit nicer to deal with.
There are other reasons but those are the two major ones I can come up with off the top of my head here, lol.
Why use Uncrustify for auto-code formatting?
I like Uncrustify for automatic code formatting because it has so many configuration options, 735 in fact! It allows you to configure exactly how you like your code to look, as opposed to many other popular “opinionated” auto-code formatting tools that don’t give you that flexibility.
How to Setup Uncrustify
To utilize Uncrustify with your Salesforce projects in Visual Studio Code you need to do the following (there is also a guide on the GitHub wiki):
1) Install Uncrustify locally on your machine. I would suggest installing it via a package manager like Chocolatey or npm if you are using a Windows machine (you may need to download npm or Chocolatey first if you haven’t already).
3) In Visual Studio Code press Ctrl + Shift + P to bring up the command palette and then type the following command: Preferences: Open Settings (JSON). This should bring up a JSON file with your VS Code settings.
4) Inside the settings.json file you opened in the last step, add the following lines:
6) Press Ctrl + Shift + P to bring up the command palette again and enter the following command: Uncrustify: Create Default Config. After running this command you should see an uncrustify.cfg file in your VS Code project
7) Open the uncrustify.cfg file and update all your settings in it (This is just a nice UI with a ton of different options, so nothing complicated here). Make sure to hit the save button in the top right corner when you are done!
8) Once you’ve finished, open a file you would like to have code formatting done to and press Alt+Shift+F.
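By the way, the uncrustify.cfg file from step 7 is just a plain text file of option = value pairs under the hood; the nice UI is simply editing those for you. A couple of real Uncrustify options look roughly like this (the values here are only example choices, not recommendations):

```
# Indent width in columns.
indent_columns   = 4

# How to use tabs when indenting (0 = spaces only, 1 = indent with tabs,
# 2 = indent with tabs, align with spaces).
indent_with_tabs = 0

# Whether to add a newline between '}' and 'else' (ignore/add/remove/force).
nl_brace_else    = add
```

That’s why distributing one config file to the whole team, like mentioned earlier, works so well: everyone’s formatter reads the exact same settings.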
And that’s it boiz and gurlz! Enjoy your auto-formatted code!