A JavaScript Crash Course from an Apex Dev's Point of View - Part 1

At TrailheaDX ‘19, I was fortunate enough to do a talk on learning JavaScript fundamentals from an Apex developer’s point of view. This series covers the topics from that talk.

I’ve been a developer for about 8 years now, and while most of that time has been focused on the Salesforce platform, I’ve also had the opportunity to work with Golang, Ruby, and of course, JavaScript. Just like the languages we use to speak with each other, part of successfully using a programming language is embracing the nuances commonly followed by other users of that language. In other words:

Stop developing in JavaScript as if you were writing Apex.

During this series, we’ll cover some topics that I believe provide a strong foundation to getting started with JavaScript by comparing its features to analogous features from Apex.

JavaScript is a Dynamically Typed Language, Apex is Static

In a statically typed language like Apex, variables must be declared with their type and thus those variables can only reference their declared type. Here’s a simple Apex method for example:

Integer increment(Integer x) {
    return x + 1;
}

Because of static typing, the method increment can only accept an Integer as a parameter and can only return an Integer. This provides a level of protection at compile time, as there is a stricter contract as to what your method will accept and return, leaving you with fewer surprises at runtime.

In a dynamically typed language like JavaScript, variable types are defined at runtime. Here’s the same method as a JavaScript function:

function increment(x) {
    return x + 1;
}

The parameter x can be anything! A string, an integer, or some kind of arbitrary object. You also do not necessarily know what it will return just by looking at it. You lose that compile-time protection, but you also gain more flexibility. For instance, dynamic typing makes defining arbitrary data easier. Here’s a function that returns an arbitrary data structure declared inline:

function returnData() {
    return {
        a : 'some',
        b : 'random',
        c : {
            d : 'nested data'
        }
    }
}
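
For instance, calling this function and drilling into the nested value requires no class or type declarations at all. A quick sketch reusing the function above:

```javascript
// Reusing the returnData function from above
function returnData() {
    return {
        a : 'some',
        b : 'random',
        c : {
            d : 'nested data'
        }
    };
}

const data = returnData();
console.log(data.c.d); // 'nested data'
```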

I can’t count the number of times while writing Apex that I’ve just needed to process some data and return it in some arbitrary structure. Here is the equivalent of that data structure in Apex:

class RandomData {
    String a;
    String b;
    NestedData c;
}

class NestedData {
    String d;
}

RandomData returnData() {
    RandomData rand = new RandomData();
    NestedData nest = new NestedData();

    rand.a = 'some';
    rand.b = 'random';
    rand.c = nest;
    nest.d = 'nested data';

    return rand;
}

Because Apex is statically typed, you would need to define objects that follow the data structure you want. You get some type safety here, but if this is just some private helper method in a class, that type safety is probably outweighed by the extra code that you have to write (and thus maintain!). This is a pretty contrived example, but you have probably seen this pattern when making wrapper classes for SObjects, where you want to encapsulate an SObject alongside data that does not exist as a field on that SObject. For example:

class OpportunityWrapper {
    Opportunity opp;
    Boolean shouldDisplay;
}

Sure, that’s only four lines, but where should those four lines live? In the controller where it’s used? In its own file? What if you have to add more properties? What if those properties aren’t necessarily related? You can see how things can start to spiral out of control, all over a simple data structure.

The increased flexibility of dynamic typing, however, can lead to some unexpected results. Consider this concatenate function that accepts two parameters and logs their concatenation as a string.

function concatenate(a, b){
    console.log(a + b);
}

If you call the function with the strings “hello” and “world”, you’ll see the expected output “helloworld” with no spaces. If you call it with the integers 1 and 2, you might expect the output “12”, but it actually outputs 3. The function simply adds the parameters together, and adding two numbers behaves differently from concatenating two strings. To fix this, add an empty string to the concatenation so the parameters are coerced into strings. Here’s the fixed version:

function concatenate(a, b){
    console.log(a + '' + b);
}

Now passing the integers 1 and 2 to the function will output the expected “12”. So when you are writing JavaScript, make sure to be more defensive when handling variable types to prevent unexpected results.
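
As a sketch of that defensive style, you can coerce values deliberately and validate types explicitly with typeof. The function names and the decision to throw here are my own choices, not a standard convention:

```javascript
// Coerce both parameters to strings before concatenating,
// so numbers behave the same way strings do
function safeConcatenate(a, b) {
    return String(a) + String(b);
}

// Fail fast when the parameter is not a number
function safeIncrement(x) {
    if (typeof x !== 'number') {
        throw new TypeError('x must be a number');
    }
    return x + 1;
}

console.log(safeConcatenate(1, 2)); // '12'
console.log(safeIncrement(1));      // 2
```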

Scoping

In short, scope refers to visibility of variables within your code. JavaScript and Apex have some differences when handling scope.

Apex is pretty straightforward with block-level scope. You can think of a block as anything between a pair of curly braces. Scope resolution flows outward in Apex. When a variable is used in a block, the compiler searches within that block for its definition. If the variable is not defined there, the compiler searches the enclosing scope to see if it is a static or instance property of that class. If it still can’t find it, it checks the global scope (which contains the definitions for tokens like Test and Schema). If that fails, you get a compilation error.

This outward flow of scope also explains why in Apex you are able to have some level of duplicate variable names without issue. For example, in this Apex snippet, the variable i is defined twice: once within the blockScope method and once as an instance variable. The blockScope method uses the i declared within the method’s block rather than the instance variable defined in the outer scope.

private Integer i = 0;

void blockScope() {
    Integer i = 10;
    System.debug(i);
}

blockScope(); //outputs 10;

In JavaScript, scope depends on how you declare your variables. You are probably most familiar with using the var keyword to declare variables, which provides function-level scope.

For example, consider this function, where the variable greeting is defined within an if block. The greeting variable is still visible outside of the block due to function-level scoping.

function greet() {
    if(true) {
        var greeting = 'hello!';
    }

    console.log(greeting) //Outputs 'hello!'
}

If you forget to use the var keyword, JavaScript will actually put that variable on the global scope (i.e. the window object in a browser).

function greet() {
    if(true) {
        greeting = 'hello!';
    }

    console.log(greeting) //Outputs 'hello!'
}

Fortunately, if you write your code with 'use strict', then strict mode will prevent this from happening and throw an error instead.

function greet() {
    'use strict'
    if(true) {
        greeting = 'hello!'; //Throws "ReferenceError: greeting is not defined"
    }

    console.log(greeting) 
}

Lightning Locker actually enforces strict mode everywhere, so you don’t have to specify 'use strict' yourself, but it is important to understand the mechanism behind this scoping.

ES6 introduced two new keywords for variable declaration, let and const, which provide block-level scope like in Apex. let is used for variables that you want to be able to reassign, while const variables cannot be reassigned. In the following function, the variables greeting1 and greeting2 are not accessible outside of the if block:

function greet() {
    'use strict'
    if(true) {
        let greeting1 = 'hello';
        const greeting2 = 'world!';
    }

    console.log(greeting1) //Throws "Uncaught ReferenceError: greeting1 is not defined"
    console.log(greeting2) //Throws "Uncaught ReferenceError: greeting2 is not defined"
}

In general, I default to const when the variable won’t be reassigned, use let when I need to reassign it, and only use var when I need function-level scope, though I don’t run into many scenarios where that is necessary.
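
One nuance worth calling out: const prevents reassignment of the variable binding, not mutation of the value it references. Objects and arrays declared with const can still change:

```javascript
const numbers = [1, 2];
numbers.push(3);           // allowed: mutating the array's contents
console.log(numbers);      // [ 1, 2, 3 ]

const config = { debug: false };
config.debug = true;       // allowed: mutating a property
console.log(config.debug); // true

try {
    numbers = [];          // not allowed: reassigning the binding itself
} catch (e) {
    console.log(e instanceof TypeError); // true
}
```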

These are some very basic differences between JavaScript and Apex. In the next part, we’ll dive deeper by covering first class functions and how that changes the way you approach writing code in JavaScript compared to Apex.

LWC Testing - Mocking registerListener in pubsub

Communication between sibling components in LWC is not included out of the box, but luckily the lwc-recipes GitHub repo contains a pubsub component that can handle this for you. Testing these events, however, was a little tricky, and I wanted to share how I approached it.

Let’s say you have two components - one that displays data and one that displays a status. When the data is retrieved from the Apex controller, you want to fire an event so that the status component knows to display the status. With pubsub, you can fire an event from the data component and have the status component listen for that event. Now your two components can communicate while also staying loosely coupled. Here’s a simple version of what the JavaScript would look like for each component.

//c-data-component

import { LightningElement, wire } from 'lwc'
import { CurrentPageReference } from 'lightning/navigation'
import getData from '@salesforce/apex/DataController.getData'

import { fireEvent } from 'c/pubsub'

export default class DataComponent extends LightningElement {
    @wire(CurrentPageReference) pageRef;

    fetchData() {
        getData()
            .then(
                function() {
                    fireEvent(this.pageRef, 'myevent')
                }.bind(this)
            );
    }
}
//c-status-component

import { LightningElement, wire } from 'lwc'
import { CurrentPageReference } from 'lightning/navigation'

import { registerListener, unregisterAllListeners } from 'c/pubsub'

export default class StatusComponent extends LightningElement {
    @wire(CurrentPageReference) pageRef;

    connectedCallback() {
        registerListener('myevent', this.displayStatus, this);
    }

    disconnectedCallback() {
        unregisterAllListeners(this);
    }
    
    displayStatus() {
        this.template.querySelector('.status').classList.remove('slds-hide');
    }
}

//html

<template>
    <div class="status slds-hide">Data Retrieved!</div>
</template>

In summary:

  • When fetchData() is called by c-data-component, it fires the myevent event
  • When c-status-component is initialized, it registers a listener for myevent
  • When myevent is fired, c-status-component should call displayStatus and display the status div

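If the pubsub mechanism feels magical, here is a simplified sketch of how such a module could be implemented. This is illustrative only; the actual lwc-recipes pubsub also scopes listeners to the current pageRef, which is omitted here:

```javascript
// Simplified sketch of a pubsub module: a map of event names to
// registered listeners, each holding a callback and its `this` binding
const events = {};

function registerListener(eventName, callback, thisArg) {
    if (!events[eventName]) {
        events[eventName] = [];
    }
    events[eventName].push({ callback, thisArg });
}

function unregisterAllListeners(thisArg) {
    // Remove every listener registered by this component instance
    Object.keys(events).forEach((eventName) => {
        events[eventName] = events[eventName].filter(
            (listener) => listener.thisArg !== thisArg
        );
    });
}

function fireEvent(pageRef, eventName, payload) {
    // Invoke each registered callback with its bound `this`
    const listeners = events[eventName] || [];
    listeners.forEach(({ callback, thisArg }) => {
        callback.call(thisArg, payload);
    });
}
```

This also explains why registerListener takes the component (`this`) as its third argument: the module needs it both to bind the callback correctly and to know which listeners to remove in unregisterAllListeners.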
As part of the c-status-component lwc-jest tests, we should confirm that the listener is registered and that, when myevent is fired, the status div is displayed. Testing that the listeners are registered is fairly straightforward; there are a lot of examples of this in lwc-recipes.

import { createElement } from 'lwc';
import StatusComponent from 'c/statusComponent'
import { registerListener, unregisterAllListeners } from 'c/pubsub'

//mock the pubSub methods
jest.mock('c/pubsub', () => {
    return {
        registerListener: jest.fn(),
        unregisterAllListeners: jest.fn()
    }
})

//remove all elements from test DOM after each test
afterEach(() => {
    while (document.body.firstChild) {
        document.body.removeChild(document.body.firstChild);
    }
    jest.clearAllMocks();
})

describe('Listeners', () => {
    it('should register and unregister myevent listener', () => {
        let element = createElement('c-status-component', { is: StatusComponent });
        document.body.appendChild(element);

        expect(registerListener.mock.calls.length).toBe(1);
        expect(registerListener.mock.calls[0][0]).toEqual('myevent');

        document.body.removeChild(element);
        expect(unregisterAllListeners.mock.calls.length).toBe(1)
    })
})

This is pretty boilerplate for any component that you want to use pubsub with. However, I was a little puzzled about how to also test displaying the status. If registerListener is being mocked, then even if I could figure out how to fire this custom event, the mock wouldn’t fire the callback. Luckily, Jest mocks allow you to access the parameters used to call a mocked method.
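
If the mechanics of mock.calls are unfamiliar: a Jest mock is essentially a function that records the arguments of every call it receives. A hand-rolled sketch of the idea (Jest’s real implementation does much more):

```javascript
// A miniature version of jest.fn(): returns a function that records
// the arguments of each call on a .mock.calls array
function makeMock() {
    const mockFn = (...args) => {
        mockFn.mock.calls.push(args);
    };
    mockFn.mock = { calls: [] };
    return mockFn;
}

const registerListener = makeMock();

// Code under test registers a listener...
registerListener('myevent', function displayStatus() {}, { some: 'component' });

// ...and the test can inspect every parameter it was called with
console.log(registerListener.mock.calls.length); // 1
console.log(registerListener.mock.calls[0][0]);  // 'myevent'
```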

So I figured a good test would be to intercept the callback parameter and fire it manually in the test, which would adequately simulate the pubsub listener firing the callback.

describe('Listeners', () => {
    ...
    it('should display the status when the myevent is fired', () => {
        let element = createElement('c-status-component', { is: StatusComponent });
        document.body.appendChild(element);

        //Access the first call of registerListener and get the 2nd parameter, which is the callback
        let myeventCallback = registerListener.mock.calls[0][1];

        //Access the first call of registerListener and get the 3rd parameter,
        //which is what `this` should be bound to
        let thisArg = registerListener.mock.calls[0][2];

        //fire the callback with the component bound as `this`
        myeventCallback.call(thisArg);

        //return a promise to resolve DOM changes
        return Promise.resolve().then(() => {
            const statusDiv = element.shadowRoot.querySelector('.status');
            expect(statusDiv.classList).not.toContain('slds-hide');
        });
    })
})

When document.body.appendChild(element); is called, it fires connectedCallback, and the mock for registerListener intercepts the parameter this.displayStatus. Now you can access that parameter and call the method with an explicitly set this argument. Note that the mock.calls access has to come after appendChild, since the listener isn’t registered until the component connects. Hopefully this helps you when writing tests for pubsub events!

Call Me Maybe - Using the Callable Interface to Build Versioned APIs

In Winter ‘19, Salesforce introduced the Callable Interface.

Enables developers to use a common interface to build loosely coupled integrations between Apex classes or triggers, even for code in separate packages. Agreeing upon a common interface enables developers from different companies or different departments to build upon one another’s solutions. Implement this interface to enable the broader community, which might have different solutions than the ones you had in mind, to extend your code’s functionality.

In short, you implement the interface and its single call method; callers pass the name of the action they want to invoke and a Map<String, Object> of any necessary parameters, and the call method dispatches the logic from there.

public class CallMeMaybe implements Callable {
    //Assume 'service' is an instance of some class in your package
    public Object call(String action, Map<String, Object> args) {
        switch on action {
            when 'doThisThing' {
                service.doThisThing();
            }

            when 'doThatThing' {
                service.doThatThing();
            }
        }
        return null;
    }
}
public class Caller {
    public void callTheCallable() {
        if (Type.forName('namespaced__CallMeMaybe') != null) {
            Callable extension = (Callable) Type.forName('namespaced__CallMeMaybe').newInstance();
            extension.call('doThisThing', new Map<String,Object>());
        }
    }
}

There’s nothing too novel here other than the conveniences this new standard interface gives us, the largest being the ability to execute methods in other packages without having a hard dependency on that package. What jumped out to me, however, was the idea of dispatching actions using a string parameter and how we can use that to build more flexible APIs in managed packages.
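
The dispatch idea itself is language-agnostic. As an illustrative sketch in JavaScript (the handler names here are hypothetical), the same “string action plus argument map” contract looks like this:

```javascript
// Hypothetical handlers keyed by action name; the single call()
// entry point dispatches on the action string
const handlers = {
    doThisThing: (args) => `did this thing with ${args.id}`,
    doThatThing: (args) => `did that thing with ${args.id}`
};

function call(action, args) {
    const handler = handlers[action];
    if (!handler) {
        throw new Error(`Unknown action: ${action}`);
    }
    return handler(args);
}

console.log(call('doThisThing', { id: 7 })); // 'did this thing with 7'
```

The caller only ever depends on the string contract, not on the concrete handlers, which is exactly the loose coupling that Callable gives Apex packages.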

Versioned APIs

One way to expose a method for execution from a managed package is to mark it as global. These global methods serve as an API to your package. However, if you ever want to adjust the behavior of a global method, you risk causing unintended side effects for subscribers that depend on the original implementation. To get around this, I generally see packages create additional global methods with names like myMethodV2.

The finality of global methods tends to make me agonize over creating them. Yes, you can deprecate them, but it feels like you’re polluting your namespace. myMethodV2 may seem OK, but myMethodV16 starts to feel a little messy. Did you know there are 15 The Land Before Time movies? It’s not a good look.

Instead, what if you created a single Callable entry point into your org as an API?

public class VersionedAPI implements Callable {
    public Object call(String action, Map<String, Object> args) {
        //format actions using the template "domain/version/action"
        //e.g. "courses/v1/create"

        List<String> actionComponents = action.split('/');
        String domain = actionComponents[0];
        String version = actionComponents[1];
        String method = actionComponents[2];

        switch on domain {
            when 'courses' {
                return courseDomain(version, method, args);
            }

            when 'students' {
                return studentDomain(version, method, args);
            }

            ...
        }
        return null;
    }

    public Object courseDomain(String version, String method, Map<String, Object> args) {
        if (version == 'v1') {
            switch on method {
                when 'create' {
                    return courseServiceV1.create();
                }
                ...
            }
        } else if (version == 'v2') {
            switch on method {
                when 'create' {
                    return courseServiceV2.create();
                }
                ...
            }
        }
        return null;
    }

    ...
}

By following this pattern, you’ll have a little more flexibility in defining your exposed methods without having to worry about the permanence of that method.

  • Typos in your action names aren’t forever anymore!
  • Remove actions that you don’t need. No more ghost town classes filled with @deprecated methods
  • Use new versions to change an action’s behavior while allowing your subscribers to update their references at their convenience
  • Experiment with new API actions in a packaged context without fear of them living in the package forever if you change your mind

Of course, with this added flexibility comes the burden of communicating these changes to your subscribers - if you remove an action, make sure to have a migration plan in place so your subscribers aren’t suddenly faced with a bug that you introduced. I hope this pattern encourages more developers to expose more functionality and fosters inter-package testing.