JavaScript Best Practices
JavaScript is one of the most widely used programming languages in the world, and it powers much of the internet, a technology we have all come to rely on. With this power comes great responsibility, and the JavaScript ecosystem evolves so rapidly that it can be hard to keep up with the latest best practices.
In this blog post, we will cover several key best practices in modern JavaScript for cleaner, more maintainable, and more performant code.
Project-defined practices trump all other practices
The project where you write code can have its own strict rules. Project rules are more relevant than any suggestion from any best practice article – this one included! If you would like to use a specific practice, make sure to sync it with the project rules and codebase and that everyone on your team is also on board.
Use up-to-date JavaScript
JavaScript was invented on December 4, 1995, and it has been evolving almost continuously ever since. Online you can find many outdated suggestions and practices, so be careful and verify that a practice you would like to use is still current.
Also, be careful when using the very latest JavaScript features. It is better to start using new JavaScript features that have been through at least Ecma TC39 Stage 3.
That said, here is a compilation of some of the current common best practices for JavaScript all in one place:
Declaring variables
You may encounter code that uses many `var` declarations. Sometimes this is deliberate, but in older code it is usually just a remnant of the pre-ES2015 approach.
Advice: Use `let` and `const` instead of `var` to declare your variables.
Why it matters: Although `var` is still available, `let` and `const` provide block scoping, which is more predictable and reduces the unexpected errors that can happen when declaring variables with function-scoped `var`.

for (let j = 1; j < 5; j++) {
  console.log(j);
}
console.log(j); // Uncaught ReferenceError: j is not defined

// If we did this using var:
for (var j = 1; j < 5; j++) {
  console.log(j); // logs the numbers 1 to 4
}
console.log(j); // 5, because j still exists outside the loop
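The `const` half of the advice deserves a quick sketch of its own: `const` prevents rebinding the variable, though it does not freeze the contents of an object assigned to it. A minimal illustration (the `limit` and `settings` names are just for this example):

```javascript
const limit = 5;

let message = '';
try {
  limit = 6; // Reassigning a const binding fails at runtime
} catch (err) {
  message = err.name;
}
console.log(message); // 'TypeError'

// const prevents rebinding, not mutation: object contents can still change
const settings = { retries: 1 };
settings.retries = 2; // allowed
console.log(settings.retries); // 2
```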
Classes instead of Function.prototype
In many old codebases or articles about OOP in JavaScript, you may run into the function prototype approach for emulation of classes. For example:
function Person(name) {
  this.name = name;
}

Person.prototype.getName = function () {
  return this.name;
};

const p = new Person('A');
console.log(p.getName()); // 'A'
Advice: This approach uses constructors to control the prototype chain. However, in such cases, using classes is almost always better.
class Person {
  constructor(name) {
    this.name = name;
  }

  getName() {
    return this.name;
  }
}

const p = new Person('A');
console.log(p.getName()); // 'A'
Why it matters: Classes provide much cleaner syntax while using the same prototype mechanism under the hood.
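Class syntax also makes inheritance far simpler: `extends` and `super` replace manual prototype wiring. A small sketch extending the `Person` example with a hypothetical `Employee` subclass:

```javascript
class Person {
  constructor(name) {
    this.name = name;
  }
  getName() {
    return this.name;
  }
}

// 'extends' and 'super' replace manual prototype chain setup
class Employee extends Person {
  constructor(name, role) {
    super(name); // call the parent constructor
    this.role = role;
  }
  getName() {
    return `${super.getName()} (${this.role})`;
  }
}

const e = new Employee('A', 'engineer');
console.log(e.getName()); // 'A (engineer)'
console.log(e instanceof Person); // true
```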
Private class fields
In older JavaScript code, it was common to use an underscore (`_`) as a convention to denote private properties or methods in classes. However, this doesn’t actually enforce privacy – it just serves as a signal to developers that something is meant to be private.

class Person {
  constructor(name) {
    this._name = name; // Conventionally treated as private, but not truly private
  }

  getName() {
    return this._name;
  }
}

const p = new Person('A');
console.log(p.getName()); // 'A'
console.log(p._name);     // 'A' (still accessible from outside)
Advice: When you really need private fields in classes, JavaScript now has real private fields using the `#` syntax. This is an official language feature that enforces true privacy.

class Person {
  #name;

  constructor(name) {
    this.#name = name;
  }

  getName() {
    return this.#name;
  }
}

const p = new Person('A');
console.log(p.getName()); // 'A'
Why it matters: Using real private fields ensures that the data is truly encapsulated, preventing accidental or malicious access from outside the class. The underscore convention only provides a visual cue and can easily be misused, while `#` private fields guarantee privacy by design. This results in more robust and maintainable code.
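As a related detail, the `in` operator also works with private fields (an ES2022 addition), which enables so-called brand checks: testing whether an object was really constructed by the class. A short sketch:

```javascript
class Person {
  #name;

  constructor(name) {
    this.#name = name;
  }

  // Brand check: does this object actually carry the #name private field?
  static isPerson(obj) {
    return #name in obj;
  }
}

console.log(Person.isPerson(new Person('A'))); // true
console.log(Person.isPerson({ name: 'A' }));   // false
```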
Arrow function expressions
Arrow functions are often used to make callback functions or anonymous functions more concise and readable. They are especially useful when working with higher-order functions like `map`, `filter`, or `reduce`.

const numbers = [1, 2];

// Using arrow function
numbers.map(num => num * 2);

// Instead of
numbers.map(function (num) {
  return num * 2;
});
Advice: Arrow functions provide a more concise syntax, especially when the function body is a single expression. They also capture the `this` of their surrounding scope instead of defining their own, which can be particularly helpful in class methods where `this` can easily get lost.
Consider this example with a class:
class Person {
  name = 'A';

  // Arrow function retains the 'this' context
  getName = () => this.name;
}

const getName = new Person().getName;
console.log(getName()); // 'A'
Why it matters: Arrow functions enhance readability by removing boilerplate code, making callback functions and inline expressions much more concise. In addition, they are particularly valuable when working with classes or event handlers, as they automatically bind `this` to the surrounding lexical scope. This avoids common bugs related to `this` in traditional function expressions, especially in asynchronous or callback-heavy code.
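To make the `this` pitfall concrete, here is a small sketch (the `Counter` class is hypothetical) contrasting a regular method, which loses `this` when detached, with an arrow-function field, which keeps it:

```javascript
class Counter {
  count = 0;

  // Regular method: 'this' depends on how the method is called
  incrementMethod() {
    this.count++;
  }

  // Arrow function field: 'this' is fixed to the instance
  incrementArrow = () => {
    this.count++;
  };
}

const counter = new Counter();

const detachedArrow = counter.incrementArrow;
detachedArrow(); // 'this' is still the counter instance
console.log(counter.count); // 1

const detachedMethod = counter.incrementMethod;
let lostThis = false;
try {
  detachedMethod(); // 'this' is undefined here (class code runs in strict mode)
} catch (err) {
  lostThis = err instanceof TypeError;
}
console.log(lostThis); // true
```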
Nullish coalescing (??)
In JavaScript, developers have often used the logical OR (`||`) operator to assign default values when a variable is `undefined` or `null`. However, this can behave unexpectedly when the variable holds values like `0`, `false`, or an empty string (`""`), because `||` treats them as falsy and substitutes the default value.
For example:
const value = 0;
const result = value || 10;

console.log(result); // 10 (unexpected if 0 is a valid value)
Advice: Use the nullish coalescing operator (`??`) instead of `||` when resolving default values. It only checks for `null` or `undefined`, leaving other falsy values (like `0`, `false`, `""`) intact.

const value = 0;
const result = value ?? 10;

console.log(result); // 0 (expected behavior)
Why it matters: The `??` operator provides a more precise way of handling default values in cases where only `null` or `undefined` should trigger the fallback. It prevents errors caused by using `||`, which may unintentionally override valid falsy values. Using nullish coalescing results in more predictable behavior, improving both code clarity and reliability.
Optional chaining (?.)
When dealing with deeply nested objects or arrays, it’s common to have to check whether each property or array element exists before trying to access the next level. Without optional chaining, this requires verbose and repetitive code.
For example:
const product = {};

// Without optional chaining
const tax = (product.price && product.price.tax) ?? undefined;
Advice: The optional chaining operator (?.) simplifies this process by automatically checking if a property or method exists before attempting to access it. If any part of the chain is null or undefined, it will return undefined rather than throwing an error.
const product = {};

// Using optional chaining
const tax = product?.price?.tax;
Why it matters: Optional chaining reduces the amount of boilerplate code and makes it easier to work with deeply nested structures. It ensures your code is cleaner and less error-prone by handling `null` or `undefined` values gracefully, without the need for multiple checks. This leads to more readable and maintainable code, especially when working with dynamic data or complex objects.
async/await
In older JavaScript, handling asynchronous operations often relied on callbacks or chaining promises, which could quickly lead to complex, hard-to-read code. For example, using `.then()` for promise chaining could make the flow harder to follow, especially with multiple asynchronous operations:

function fetchData() {
  return fetch('https://api.example.com/data')
    .then(response => response.json())
    .then(data => {
      console.log(data);
    })
    .catch(error => {
      console.error(error);
    });
}
Advice: Use `async` and `await` to make your asynchronous code look more like regular, synchronous code. This improves readability and makes error handling easier with `try...catch`.

async function fetchData() {
  try {
    const response = await fetch('https://api.example.com/data');
    const data = await response.json();
    console.log(data);
  } catch (error) {
    console.error(error);
  }
}
Why it matters: The `async/await` syntax simplifies working with asynchronous operations by removing the need for chaining `.then()` and `.catch()`. It makes your code more readable, more maintainable, and easier to follow, especially when dealing with multiple async calls. Error handling is also more straightforward with `try...catch`, leading to cleaner and more predictable logic.
Interaction with object keys and values
In older JavaScript code, interacting with the keys and values of an object often involved manual looping with `for...in` or `Object.keys()`, followed by accessing values through bracket notation or dot notation. This can lead to verbose and less intuitive code.

const obj = { a: 1, b: 2, c: 3 };

// Older approach with Object.keys()
Object.keys(obj).forEach(key => {
  console.log(key, obj[key]);
});
Advice: Use modern methods such as `Object.entries()`, `Object.values()`, and `Object.keys()` for working with object keys and values. These methods simplify the process and return useful structures like arrays, making your code more concise and easier to work with.

const obj = { a: 1, b: 2, c: 3 };

// Using Object.entries() to iterate over key-value pairs
Object.entries(obj).forEach(([key, value]) => {
  console.log(key, value);
});

// Using Object.values() to work directly with values
Object.values(obj).forEach(value => {
  console.log(value);
});
Why it matters: Using modern object methods such as `Object.entries()`, `Object.values()`, and `Object.keys()` results in cleaner, more readable code. These methods reduce the amount of boilerplate needed for iterating over objects and improve code clarity, especially when dealing with complex or dynamic data structures. They also support easier transformations of objects into other forms (e.g. arrays), making data manipulation more flexible and efficient.
Check the array type of a variable
In the past, developers used various non-straightforward methods to check if a variable was an array. These included approaches like checking the constructor or using `instanceof`, but they were often unreliable, especially when dealing with different execution contexts (like iframes).

const arr = [1, 2, 3];

// Older approach
console.log(arr instanceof Array); // true, but not always reliable across different contexts
Advice: Use the modern `Array.isArray()` method, which provides a simple and reliable way to check whether a variable is an array. This method works consistently across different environments and execution contexts.

const arr = [1, 2, 3];
console.log(Array.isArray(arr)); // true
Why it matters: `Array.isArray()` is a clear, readable, and reliable way to check for arrays. It eliminates the need for verbose or error-prone methods like `instanceof`, ensuring your code handles array detection correctly, even in complex or cross-environment scenarios. This leads to fewer bugs and more predictable behavior when working with different types of data structures.
Map
In earlier JavaScript, developers often used plain objects to map keys to values. However, this approach has limitations, especially when keys are not strings or symbols. Plain objects can only use strings or symbols as keys, so if you need to map non-primitive objects (like arrays or other objects) to values, it becomes cumbersome and error-prone.
const obj = {};
const key = { id: 1 };

// Trying to use a non-primitive object as a key
obj[key] = 'value';

console.log(obj); // { '[object Object]': 'value' } (the key was converted to a string)
Advice: Use `Map` when you need to map non-primitive objects or when a more robust data structure is required. Unlike plain objects, `Map` allows any type of value – primitives and non-primitives alike – as keys.

const map = new Map();
const key = { id: 1 };

// Using a non-primitive object as a key in a Map
map.set(key, 'value');

console.log(map.get(key)); // 'value'
Why it matters: `Map` is a more flexible and predictable way of associating values with any kind of key, whether primitive or non-primitive. It preserves the type and order of keys, unlike plain objects, which convert keys to strings. This leads to more powerful and efficient handling of key-value pairs, especially when working with complex data or when you need fast lookups in larger collections.
Symbols for hidden values
In JavaScript, objects are typically used to store key-value pairs. However, when you need to add “hidden” or unique values to an object without risking name collisions with other properties, or you want to keep them somewhat private from external code, using `Symbol` can be very helpful. Symbols create unique keys that are not accessible via normal enumeration or accidental property lookup.

const obj = { name: 'Alice' };
const hiddenKey = Symbol('hidden');

obj[hiddenKey] = 'Secret Value';

console.log(obj.name);       // 'Alice'
console.log(obj[hiddenKey]); // 'Secret Value'
Advice: Use `Symbol` when you want to add hidden properties to an object. Symbol keys don’t show up in typical object operations like `for...in` loops or `Object.keys()`, making them well suited for internal or private data that shouldn’t be exposed accidentally.

const obj = { name: 'Alice' };
const hiddenKey = Symbol('hidden');

obj[hiddenKey] = 'Secret Value';

console.log(Object.keys(obj)); // ['name'] (Symbol keys won't appear)
console.log(Object.getOwnPropertySymbols(obj)); // [Symbol(hidden)] (accessible only if specifically retrieved)
Why it matters: Symbols allow you to safely add unique and “hidden” properties to objects without worrying about key collisions or exposing internal details to other parts of the codebase. They can be especially useful in libraries or frameworks where you might need to store metadata or internal states without affecting or interfering with other properties. This ensures better encapsulation and reduces the risk of accidental overwrites or misuse.
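One detail worth illustrating: every `Symbol()` call produces a unique value, even with an identical description, while `Symbol.for()` goes through a global registry and returns the same symbol for the same key. A small sketch (the `'app.hidden'` key is hypothetical):

```javascript
// Every Symbol() call produces a unique value, even with the same description
const s1 = Symbol('hidden');
const s2 = Symbol('hidden');
console.log(s1 === s2); // false

// Symbol.for uses a global registry, so the same key yields the same symbol
const g1 = Symbol.for('app.hidden');
const g2 = Symbol.for('app.hidden');
console.log(g1 === g2); // true
```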
Check the Intl API before using extra formatting libraries
In the past, developers often relied on third-party libraries for tasks like formatting dates, numbers, and currencies to suit different locales. While these libraries provide powerful functionality, they add extra weight to your project and may duplicate features already built into JavaScript.
// Using a library for currency formatting
const amount = 123456.78;
// formatLibrary.formatCurrency(amount, 'USD');
Advice: Before reaching for an external library, consider using the built-in ECMAScript Internationalization API (`Intl`). It provides robust out-of-the-box functionality for formatting dates, numbers, currencies, and more based on locale. This can often cover most of your internationalization and localization needs without the extra overhead of third-party libraries.

const amount = 123456.78;

// Using Intl.NumberFormat for currency formatting
const formatter = new Intl.NumberFormat('en-US', {
  style: 'currency',
  currency: 'USD',
});

console.log(formatter.format(amount)); // $123,456.78
You can also use it for dates:
const date = new Date();

const dateFormatter = new Intl.DateTimeFormat('en-GB', {
  year: 'numeric',
  month: 'long',
  day: 'numeric',
});

console.log(dateFormatter.format(date)); // e.g. '15 October 2024'
Why it matters: The `Intl` API provides native and highly optimized support for internationalization, making it unnecessary to import large libraries for simple formatting needs. By using built-in features, you can keep your project lightweight, reduce dependencies, and still offer comprehensive locale-based formatting solutions. This not only improves performance but also reduces the maintenance burden associated with third-party libraries.
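Beyond numbers and dates, `Intl` also covers relative time phrases and locale-aware list joining. A brief sketch (outputs assume a runtime with full ICU data, as in recent Node.js builds):

```javascript
// Relative time phrases, without a date library
const rtf = new Intl.RelativeTimeFormat('en', { numeric: 'auto' });
console.log(rtf.format(-1, 'day')); // 'yesterday'
console.log(rtf.format(3, 'hour')); // 'in 3 hours'

// Locale-aware list joining
const lf = new Intl.ListFormat('en', { style: 'long', type: 'conjunction' });
console.log(lf.format(['HTML', 'CSS', 'JavaScript'])); // 'HTML, CSS, and JavaScript'
```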
Common practices
Now, let’s look at some common practices that should be best practices.
Use strict equality (===) if possible
One of the trickiest and most surprising behaviors in JavaScript comes from the loose equality operator (`==`). It performs type coercion, which means it tries to convert operands to the same type before comparing them. This can lead to strange and unexpected results, as demonstrated in the famous “WTFJS” cases from talks like Brian Leroux’s:
console.log([] == ![]); // true (this is surprising!)
In this case, the loose equality operator (`==`) converts both sides in unexpected ways, leading to unintuitive results.
Advice: Whenever possible, use strict equality (`===`) instead of loose equality (`==`). Strict equality does not perform type coercion – it compares both value and type directly, which leads to more predictable and reliable behavior.

console.log([] === ![]); // false (as expected)
Here’s a more typical example to highlight the difference:
// Loose equality (==) performs type coercion
console.log(0 == ''); // true

// Strict equality (===) compares both value and type
console.log(0 === ''); // false (as expected)
Why it matters: Using strict equality (`===`) helps avoid the unexpected behavior that comes with type coercion in JavaScript. It makes comparisons more predictable and reduces the risk of subtle bugs, especially when dealing with different data types like numbers, strings, or booleans. It’s a good practice to default to `===` unless you have a specific reason to use loose equality and understand the implications.
Explicitly handle expressions in if statements
In JavaScript, the `if` statement implicitly converts the result of the expression it evaluates into a “truthy” or “falsy” value. This means that values like `0`, `""` (empty string), `null`, `undefined`, and `false` are all treated as falsy, while most other values (even things like `[]` or `{}`) are truthy. This implicit casting can sometimes lead to unexpected results if you’re not careful.
For example:
const value = 0;

if (value) {
  console.log('This will not run because 0 is falsy.');
}
Advice: It’s a good practice to make the conditions in if statements explicit, especially when the values you’re working with might not behave as expected in a truthy/falsy evaluation. This makes the code more predictable and easier to understand.
For instance, instead of relying on implicit type coercion:
const value = 0;

// Implicit check (may behave unexpectedly for some values)
if (value) {
  console.log('This won’t run');
}
You can use explicit conditions:
// Explicitly check for the type or value you expect
if (value !== 0) {
  console.log('This will run only if value is not 0.');
}
Or, when checking for `null` or `undefined`:

const name = null;

if (name != null) {
  // Explicitly checking for null or undefined
  console.log('Name is defined');
} else {
  console.log('Name is null or undefined');
}
Why it matters: By explicitly defining the conditions in your `if` statements, you reduce the chances of unexpected behavior from JavaScript’s automatic type coercion. This makes your code clearer and helps prevent bugs when working with potentially ambiguous values like `0`, `false`, `null`, or `""`. It’s a good practice to be explicit about what conditions you’re checking for, especially in complex logic.
Don’t use built-in Number for sensitive calculations
JavaScript’s built-in `Number` type is a floating-point number based on the IEEE 754 standard. While this is efficient for most purposes, it can lead to surprising inaccuracies, particularly with decimal arithmetic. This is not a problem specific to JavaScript, but it can cause serious issues when you’re working with sensitive data such as financial calculations.
For example, you might encounter this famous floating-point problem:
console.log(0.1 + 0.2); // 0.30000000000000004
Advice: When precision is critical – such as in financial calculations – avoid using the standard `Number` type for arithmetic. Instead, use specialized libraries like `decimal.js` or `big.js` that are designed to handle precise decimal calculations without floating-point errors.
Here’s how it works with a library like `decimal.js`:

const Decimal = require('decimal.js');

const result = new Decimal(0.1).plus(0.2);
console.log(result.toString()); // '0.3'
These libraries ensure that the calculations are precise and that rounding errors won’t impact the result, making them ideal for sensitive tasks like handling money.
Why it matters: Inaccurate calculations can lead to serious issues when working with things like financial data, where even tiny discrepancies matter. JavaScript’s floating-point math can produce unexpected results, and while improvements are being made to the language, for now, it’s best to rely on libraries like `decimal.js` or `big.js` to ensure precision. By using these libraries, you avoid common pitfalls and ensure that your calculations are accurate, trustworthy, and suitable for critical applications.
Be careful with JSON and big integers
JavaScript has limits when it comes to handling very large numbers. The maximum safe integer in JavaScript is `9007199254740991` (also known as `Number.MAX_SAFE_INTEGER`). Numbers larger than this may lose precision and produce incorrect results. This becomes a problem when working with APIs or systems outside of JavaScript, where big numbers – such as database `id` fields – can easily exceed JavaScript’s safe range.
For example, when parsing JSON with a large number:
console.log(JSON.parse('{"id": 9007199254740999}'));
// Output: { id: 9007199254741000 } (precision lost)
Advice: To avoid this precision issue when dealing with large numbers from JSON data, you can use the `reviver` parameter of `JSON.parse()`. This allows you to manually handle specific values – like `id` fields – and preserve them in a safe format, such as strings. Note that the third `ctx` argument, which exposes the raw `source` text, is a recent addition to the language, so check your runtime’s support before relying on it.

console.log(
  JSON.parse('{"id": 9007199254740999}', (key, value, ctx) => {
    if (key === 'id') {
      return ctx.source; // Preserve the original value as a string
    }
    return value;
  })
);
// Output: { id: '9007199254740999' }
Using BigInt: JavaScript introduced `BigInt` to safely work with numbers larger than `Number.MAX_SAFE_INTEGER`. However, `BigInt` cannot be directly serialized to JSON. If you attempt to stringify an object containing a `BigInt`, you’ll get a `TypeError`:

const data = { id: 9007199254740999n };

try {
  JSON.stringify(data);
} catch (e) {
  console.log(e.message); // 'Do not know how to serialize a BigInt'
}
To handle this, use the `replacer` parameter in `JSON.stringify()` to convert `BigInt` values to strings:

const data = { id: 9007199254740999n };

console.log(
  JSON.stringify(data, (key, value) => {
    if (typeof value === 'bigint') {
      return value.toString() + 'n'; // Append 'n' to denote BigInt
    }
    return value;
  })
);
// Output: {"id":"9007199254740999n"}
⚠️ Important consideration: If you use these techniques for handling large integers with JSON, ensure that both the client and server sides of your application agree on how to serialize and deserialize the data. For example, if the server sends an `id` as a string or `BigInt` with a specific format, the client must be prepared to handle that format during deserialization.
Why it matters: JavaScript’s number precision limits can lead to serious bugs when working with large numbers from external systems. By using techniques like `BigInt` and the `reviver`/`replacer` parameters of `JSON.parse()` and `JSON.stringify()`, you can ensure that large integers are handled correctly, avoiding data corruption. This is especially important in cases where precision is crucial, such as dealing with large ids or financial transactions.
Use JSDoc for helping code readers and editors
When working with JavaScript, functions and object signatures often lack documentation, making it harder for other developers (or even your future self) to understand what parameters and objects contain or how a function should be used. Without proper documentation, code can be ambiguous, especially if object structures aren’t clear:
For example:
const printFullUserName = user =>
  // Does user have `middleName` or `surName`?
  `${user.firstName} ${user.lastName}`;
In this case, without any documentation, it’s not immediately clear what properties the `user` object should have. Does `user` contain `middleName`? Should `surName` be used instead of `lastName`?
Advice: By using JSDoc, you can define the expected structure of objects, function parameters, and return types. This makes it easier for code readers to understand the function and also helps code editors provide better autocompletion, type checking, and tooltips.
Here’s how you can improve the previous example with JSDoc:
/**
 * @typedef {Object} User
 * @property {string} firstName
 * @property {string} [middleName] Optional property
 * @property {string} lastName
 */

/**
 * Prints the full name of a user.
 * @param {User} user - The user object containing name details.
 * @return {string} - The full name of the user.
 */
const printFullUserName = user =>
  `${user.firstName} ${user.middleName ? user.middleName + ' ' : ''}${user.lastName}`;
Why it matters: JSDoc improves the readability and maintainability of your code by clearly indicating what types of values are expected in functions or objects. It also enhances the developer experience by enabling autocompletion and type-checking in many editors and IDEs, such as Visual Studio Code or WebStorm. This reduces the likelihood of bugs and makes it easier for new developers to onboard and understand the code.
With JSDoc, editors can provide hints, autocompletion for object properties, and even warn developers when they misuse a function or provide the wrong parameter type, making your code both more understandable and robust.
Use tests
As your codebase grows, manually verifying that new changes don’t break important functionality becomes time-consuming and error-prone. Automated testing helps ensure that your code works as expected and allows you to make changes with confidence.
In the JavaScript ecosystem, there are many testing frameworks available, but as of Node.js version 20, you no longer need an external framework to start writing and running tests. Node.js now includes a built-in stable test runner.
Here’s a simple example using Node.js’s built-in test runner:
import { test } from 'node:test';
import { equal } from 'node:assert';

// A simple function to test
const sum = (a, b) => a + b;

// Writing a test for the sum function
test('sum', () => {
  equal(sum(1, 1), 2); // Passes if 1 + 1 equals 2
});
You can run this test with the following command:
node --test
This built-in solution simplifies the process of writing and running tests in Node.js environments. You no longer need to configure or install additional tools like Jest or Mocha, though those options are still great for larger projects.
E2E testing in browsers: For end-to-end (E2E) testing in browsers, Playwright is an excellent tool that allows you to easily automate and test interactions within the browser. With Playwright, you can test user flows, simulate interactions across multiple browsers (such as Chrome, Firefox, and Safari), and ensure that your app behaves as expected from the user’s perspective.
Other environments: Bun and Deno, two alternative JavaScript runtimes, also provide built-in test runners similar to Node.js, making it easy to write and run tests without extra setup.
Why it matters: Writing tests saves time in the long run by catching bugs early and reducing the need for manual testing after every change. It also gives you confidence that the new features or refactoring won’t introduce regressions. The fact that modern runtimes like Node.js, Bun, and Deno include built-in test runners means you can start writing tests right away, with minimal setup. Testing tools like Playwright help ensure your application works seamlessly in real-world browser environments, adding an extra layer of assurance for critical user interactions.
Final Thoughts
Though this may seem like a lot to take in, hopefully it has given you insight into some areas you haven’t otherwise considered and would like to implement in your JavaScript projects. Again, feel free to bookmark this page and come back to it anytime you need a refresher. JavaScript conventions are constantly changing and evolving, as are the frameworks. Keeping up with the latest tools and best practices will continuously improve and optimize your code, but it can be difficult to do. We’d recommend following along with the ECMAScript releases, as they often point to new conventions that are then generally adopted in the latest JavaScript code. TC39 maintains proposals for upcoming ECMAScript versions, which you can follow along with too.
By embracing these modern JavaScript best practices, you’re not just writing code that works – you’re creating cleaner, more efficient, and more maintainable solutions. Whether it’s using newer syntax like `async/await`, avoiding pitfalls with floating-point numbers, or leveraging the powerful `Intl` API, these practices will help you stay up-to-date and confident in your codebase. As the JavaScript ecosystem continues to evolve, taking the time to adopt best practices now will save you from future headaches and set you up for long-term success.
That’s it for today! We hope that this has been useful – the comments are open for questions, discussions, and sharing advice. Happy coding!