Patterns for Memory Efficient DOM Manipulation with Modern Vanilla JavaScript

Let’s continue the modern vanilla JavaScript series!

Memory Efficient DOM Manipulation

I’ll discuss best practices to avoid excess memory usage when updating the DOM, to make your apps blazingly fast™️.

DOM: Document Object Model – A Brief Overview

When you render HTML, the live view of those rendered elements in the browser is called the DOM. This is what you’ll see in your developer tools “Elements” inspector:

The Elements panel in Chrome DevTools

It’s essentially a tree, with each element inside of it being a node in that tree. There is an entire set of APIs specifically dealing with modifying this tree of elements.

Here’s a quick list of common DOM APIs:

  • querySelector()
  • querySelectorAll()
  • createElement()
  • getAttribute()
  • setAttribute()
  • addEventListener()
  • appendChild()

These are attached to the document, so you use them like const el = document.querySelector("#el");. They are also available on all other elements, so if you have an element reference you can use these methods and their abilities are scoped to that element.

const nav = document.querySelector("#site-nav");
const navLinks = nav.querySelectorAll("a");

These methods are available in the browser to modify the DOM, but they won’t be available in server-side JavaScript (like Node.js) unless you use a DOM emulator like jsdom.

As an industry, we’ve offloaded most of this direct rendering to frameworks. All JavaScript frameworks (React, Angular, Vue, Svelte, etc) use these APIs under the hood. While I recognize that the productivity benefits of frameworks often outweigh the potential performance gains of manual DOM manipulation, I want to demystify what goes on under the hood in this article.

Why manipulate the DOM yourself in the first place?

The main reason is performance. Frameworks can add unnecessary data structures and re-renders, leading to the dreaded stuttering or freezing behavior seen in many modern web apps. This is largely due to the garbage collector being put into overdrive to clean up all the extra allocations.

The downside is that it’s more code to handle DOM manipulation yourself. It can get complicated, which is why it’s often a better developer experience to use frameworks and abstractions around the DOM rather than manipulating it manually. Still, there are cases where you need the extra performance, and that is what this guide is for.

VS Code is Built on Manual DOM Manipulation

Visual Studio Code is one such case. VS Code is written in vanilla JavaScript “to be as close to the DOM as possible.” Projects as large as VS Code need tight control over performance. Since much of the power is in the plugin ecosystem, the core needs to be as lean and lightweight as possible, and that performance focus is partly responsible for its wide adoption.

Microsoft Edge also recently moved off of React for the same reason.

If you find yourself in this case where you need the performance of direct DOM manipulation – lower level programming than using a framework – hopefully this article will help!

Tips for More Efficient DOM Manipulation

Prefer hiding/showing over creating new elements

Keeping your DOM unchanged by hiding and showing elements instead of destroying and creating them with JavaScript is always going to be the more performant option.

Server render your element and hide/show it with a class (and appropriate CSS ruleset) like el.classList.add('show') or el.style.display = 'block' instead of creating and inserting the element dynamically with JavaScript. The mostly static DOM is much more performant due to the lack of garbage collection calls and complex client logic.

Don’t create DOM nodes on the client dynamically if you can avoid it.

But do remember assistive technology. If you want an element both visually hidden and hidden to assistive technology, display: none; should do it. But if you want to hide an element and keep it there for assistive technology, look at other methods for hiding content.
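
As a minimal sketch of the idea (the #promo-banner id and show class are made up for illustration), the element is server rendered once and only its visibility changes:

const banner = document.querySelector("#promo-banner");

function setBannerVisible(visible) {
  // Toggle a class (backed by a CSS ruleset) instead of creating/removing the node
  banner.classList.toggle("show", visible);
}

setBannerVisible(true);  // show it
setBannerVisible(false); // hide it again; the node stays in the DOM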

Prefer textContent over innerText for reading the content of an element

The innerText property is handy because it is aware of the current styles of an element. It knows whether an element is hidden and only returns text that is actually being displayed. The issue is that this style checking forces a reflow, which makes it slower.

Reading content with element.textContent is much faster than element.innerText, so prefer textContent for reading content of an element where possible.
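
For example, reading the same element both ways (the #status id is hypothetical):

const status = document.querySelector("#status");

// Fast: returns the raw text of all descendants without any style calculation
const raw = status.textContent;

// Slower: forces style and layout so it can respect display: none, visibility, etc.
const rendered = status.innerText;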

Use insertAdjacentHTML over innerHTML

The insertAdjacentHTML method is much faster than innerHTML because it doesn’t have to destroy the DOM first before inserting. The method is flexible in where it places the new HTML, for example:

el.insertAdjacentHTML("afterbegin", html);
el.insertAdjacentHTML("beforeend", html);

The Fastest Approach is to use insertAdjacentElement or appendChild

Approach #1: Use the template tag to create HTML templates and appendChild to insert new HTML

The fastest approach is to append fully formed DOM elements. An established pattern for this is to define the markup in a <template> tag, clone it to create the elements, then insert them into the DOM with the insertAdjacentElement or appendChild methods.

<template id="card_template">
  <article class="card">
    <h3></h3>
    <div class="card__body">
      <div class='card__body__image'></div>
      <section class='card__body__content'>
      </section>
    </div>
  </article>
</template>
function createCardElement(title, body) {
  const template = document.getElementById('card_template');
  const element = template.content.cloneNode(true).firstElementChild;
  const [cardTitle] = element.getElementsByTagName("h3");
  const [cardBody] = element.getElementsByTagName("section");
  [cardTitle.textContent, cardBody.textContent] = [title, body];
  return element;
}

container.appendChild(createCardElement(
  "Frontend System Design: Fundamentals",
  "This is a random content"
))

You can see this in action in the new Front-End System Design course, where Evgenii builds an infinite scrolling social news feed from scratch!

Approach #2: Use createDocumentFragment with appendChild to Batch Inserts

DocumentFragment is a lightweight, “empty” document object that can hold DOM nodes. It’s not part of the active DOM tree, making it ideal for preparing multiple elements for insertion.

const fragment = document.createDocumentFragment();
for (let i = 0; i < 1000; i++) {
  const li = document.createElement('li');
  li.textContent = `Item ${i}`;
  fragment.appendChild(li);
}
document.getElementById('myList').appendChild(fragment);

This approach minimizes reflows and repaints by inserting all elements at once, rather than individually.

Manage References When Nodes are Removed

When you remove a DOM node, you don’t want references sitting around that prevent the garbage collector from cleaning up associated data. We can use WeakMap and WeakRef to avoid leaky references.

Associate data to DOM nodes with WeakMap

You can associate data to DOM nodes using WeakMap. That way if the DOM node is removed later, the reference to the data will be gone for good.

let DOMdata = { 'logo': 'Frontend Masters' };
let DOMmap = new WeakMap();
let el = document.querySelector(".FmLogo");
DOMmap.set(el, DOMdata);
console.log(DOMmap.get(el)); // { 'logo': 'Frontend Masters' }
el.remove(); // once no other references to el remain, DOMdata can be garbage collected

Using weak maps ensures references to associated data don’t stick around after a DOM element is removed and dereferenced.

Clean up after Garbage Collection using WeakRef

In the following example, we are creating a WeakRef to a DOM node:

class Counter {
  constructor(element) {
    // Remember a weak reference to the DOM element
    this.ref = new WeakRef(element);
    this.start();
  }

  start() {
    if (this.timer) {
      return;
    }

    this.count = 0;

    const tick = () => {
      // get the element from the weak reference, if it still exists
      const element = this.ref.deref();
      if (element) {
        console.log("Element is still in memory, updating count.")
        element.textContent = `Counter: ${++this.count}`;
      } else {
        // The element doesn't exist anymore
        console.log("Garabage Collector ran and element is GONE – clean up interval");
        this.stop();
        this.ref = null;
      }
    };

    tick();
    this.timer = setInterval(tick, 1000);
  }

  stop() {
    if (this.timer) {
      clearInterval(this.timer);
      this.timer = 0;
    }
  }
}

const counter = new Counter(document.getElementById("counter"));
setTimeout(() => {
  document.getElementById("counter").remove();
}, 5000);

After removing the node, you can watch your console to see when the actual garbage collection happens, or you can force a collection yourself using the “Collect garbage” button in the Performance tab of your developer tools.

Then you can be sure all references are gone and timers are cleaned up.

Note: Try not to overuse WeakRef—this magic does come at a cost. It’s better for performance if you can explicitly manage references.
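
Here’s a rough sketch of what explicit reference management can look like instead, using a destroy() method (the name is just a convention) that the owner calls when it removes the element:

class ExplicitCounter {
  constructor(element) {
    // Strong reference, so we are responsible for releasing it
    this.element = element;
    this.count = 0;
    this.timer = setInterval(() => {
      this.element.textContent = `Counter: ${++this.count}`;
    }, 1000);
  }

  destroy() {
    // Explicit cleanup: stop the timer and drop the reference so the node can be collected
    clearInterval(this.timer);
    this.timer = 0;
    this.element = null;
  }
}

const explicitCounter = new ExplicitCounter(document.getElementById("counter"));

setTimeout(() => {
  document.getElementById("counter").remove();
  explicitCounter.destroy(); // no WeakRef or GC timing involved
}, 5000);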

Cleaning up Event Listeners

Manually remove events with removeEventListener

function handleClick() {
  console.log("Button was clicked!");
  el.removeEventListener("click", handleClick);
}

// Add an event listener to the button
const el = document.querySelector("#button");
el.addEventListener("click", handleClick);

Use the once param for one and done events

The same behavior as above can be achieved with the once option:

el.addEventListener('click', handleClick, {
  once: true
});

Here, the third argument to addEventListener is an options object. Its once property is a boolean indicating that the listener should be invoked at most once after being added; the listener is automatically removed after it runs.

Use event delegation to bind fewer events

If you are building up and replacing nodes frequently in a highly dynamic component, it’s more expensive to have to setup all their respective event listeners as you’re building the nodes.

Instead, you can bind an event closer to the root level. Since events bubble up the DOM, you can check the event.target (original target of the event) to catch and respond to the event.

Note that matches(selector) only tests the element itself, so this works when event.target is the exact element you care about (a leaf node):

const rootEl = document.querySelector("#root");
// Listen for clicks anywhere inside the root element
rootEl.addEventListener('click', function (event) {
  // if the clicked element has the class "target-element"
  if (event.target.matches('.target-element')) doSomething();
});

More than likely, you’ll have nested markup like <div class="target-element"><p>...</p></div>. In that case, the click target may be an inner element, so you’d need the .closest(selector) method:

const rootEl = document.querySelector("#root");
// Listen for clicks anywhere inside the root element
rootEl.addEventListener('click', function (event) {
  // if the clicked element or one of its ancestors has the class "target-element"
  if (event.target.closest('.target-element')) doSomething();
});

This method allows you to not worry about attaching and removing listeners after dynamically injecting elements.

Use AbortController to unbind groups of events

const button = document.getElementById('button');
const controller = new AbortController();
const { signal } = controller;

button.addEventListener(
  'click', 
  () => console.log('clicked!'), 
  { signal }
);

// Remove the listener!
controller.abort();

You can use the AbortController to remove sets of events.

let controller = new AbortController();
const { signal } = controller;

button.addEventListener('click', () => console.log('clicked!'), { signal });
window.addEventListener('resize', () => console.log('resized!'), { signal });
document.addEventListener('keyup', () => console.log('pressed!'), { signal });

// Remove all listeners at once:
controller.abort();

Shout out to Alex MacArthur for the AbortController code example on this one.

Profiling & Debugging

Measure your DOM to make sure it is not too large.
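
A quick, rough way to get a sense of DOM size from the DevTools console (a sketch, not an official metric):

// Total number of elements currently in the document
console.log("Element count:", document.querySelectorAll("*").length);

// Maximum nesting depth, via a simple recursive walk
function maxDepth(node, depth = 0) {
  let deepest = depth;
  for (const child of node.children) {
    deepest = Math.max(deepest, maxDepth(child, depth + 1));
  }
  return deepest;
}
console.log("Max depth:", maxDepth(document.documentElement));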

Here’s a brief guide on using Chrome DevTools for memory profiling:

  1. Open Chrome DevTools
  2. Go to the “Memory” tab
  3. Choose “Heap snapshot” and click “Take snapshot”
  4. Perform your DOM operations
  5. Take another snapshot
  6. Compare snapshots to identify memory growth

Key things to look for:

  • Unexpectedly retained DOM elements
  • Large arrays or objects that aren’t being cleaned up
  • Increasing memory usage over time (potential memory leak)

You can also use the “Performance” tab to record memory usage over time:

  1. Go to the “Performance” tab
  2. Check “Memory” in the options
  3. Click “Record”
  4. Perform your DOM operations
  5. Stop recording and analyze the memory graph

This will help you visualize memory allocation and identify potential leaks or unnecessary allocations during DOM manipulation.

JavaScript Execution Time Analysis

In addition to memory profiling, the Performance tab in Chrome DevTools is invaluable for analyzing JavaScript execution time, which is crucial when optimizing DOM manipulation code.

Here’s how to use it:

  1. Open Chrome DevTools and go to the “Performance” tab
  2. Click the record button
  3. Perform the DOM operations you want to analyze
  4. Stop the recording

Excerpt from ThePrimeagen's Inspecting & Debugging Performance lesson

The resulting timeline will show you:

  • JavaScript execution (yellow)
  • Rendering activities (purple)
  • Painting (green)

Look for:

  • Long yellow bars, indicating time-consuming JavaScript operations
  • Frequent short yellow bars, which might indicate excessive DOM manipulation

To dive deeper:

  • Click on a yellow bar to see the specific function call and its execution time
  • Look at the “Bottom-Up” and “Call Tree” tabs to identify which functions are taking the most time

This analysis can help you pinpoint exactly where your DOM manipulation code might be causing performance issues, allowing for targeted optimizations.

Performance Debugging Resources

Articles by the Chrome dev team:

Courses that cover memory and performance analysis and Chrome DevTools further:

Remember, efficient DOM manipulation isn’t just about using the right methods—it’s also about understanding when and how often you’re interacting with the DOM. Excessive manipulation, even with efficient methods, can still lead to performance issues.

Key Takeaways for DOM Optimization

Efficient DOM manipulation knowledge is important when creating performance-sensitive web apps. While modern frameworks offer convenience and abstraction, understanding and applying these low-level techniques can significantly boost your app’s performance, especially in demanding scenarios.

Here’s a recap:

  1. Prefer modifying existing elements over creating new ones when possible.
  2. Use efficient methods like textContent, insertAdjacentHTML, and appendChild.
  3. Manage references carefully, leveraging WeakMap and WeakRef to avoid memory leaks.
  4. Clean up event listeners properly to prevent unnecessary overhead.
  5. Consider techniques like event delegation for more efficient event handling.
  6. Use tools like AbortController for easier management of multiple event listeners.
  7. Employ DocumentFragments for batch insertions and understand concepts like the virtual DOM for broader optimization strategies.

Remember, the goal isn’t always to forgo frameworks and manually manipulate the DOM for every project. Rather, it’s to understand these principles so you can make informed decisions about when to use frameworks and when to optimize at a lower level. Tools like memory profiling and performance benchmarking can guide these decisions.

Sending My Respect to Next.js (and Vercel)

Today, I did some maintenance work on a Next.js course website (we have tons of them built on Next.js), and I thought to myself:

“Wow, this framework has been around for a long time and continues to evolve. It is certainly not a one-hit-wonder.”

For context, I’m generally more of a purist, opting to use vanilla JavaScript and building on the web platform in most situations. Even so, I wanted to acknowledge my respect for the framework and those who have worked hard to develop and evolve Next.js (and, more broadly, React). It is certainly giving us new ways to think about building web apps.

Next.js wasn’t always the king.

To new folks in the industry: Next.js wasn’t always on top. For instance, I remember when Gatsby was constantly in the news as one of the first significant meta frameworks built on React. It was the first framework to build static sites with JSX on the front and back end. As folks hit limits in the framework and pushed against its edges, the project couldn’t deliver solutions fast enough and eventually fell out of favor.

Today, Astro is filling that gap of static sites. But if you want a complete application development ecosystem on this paradigm, Next.js is currently it.

Frontend Masters has been teaching Next.js since 2020.

Next.js has been building for years – our first course on Next.js was released back in 2020. Our Node.js teacher, Scott Moss, loved the framework and convinced us to continue releasing course updates as Next.js evolved. After the framework released the App Router and Server Actions, Scott returned to teach v3 of the Intro to Next.js course.

It takes a lot to remain in the hearts of developers for years.

Remaining in the zeitgeist is always impressive to see. And even more so now, when everyone is focused on what the framework is going to do Next (see what I did there).

Drawing the boundaries thinner between infrastructure, the server, and, ultimately, the client is a daunting task. Even if it’s not the best approach for every problem, it pushes the boundaries of what’s possible through an approach that respects interactivity as a first-class citizen.

Note that when I say Next is not the best approach for everything, that’s more so because of Node.js as a platform. Frontend Masters is built on Go, which we think fits our needs best for our given set of challenges. These ideas of putting interactivity first will eventually make their way into other frameworks and platforms as time passes.

We are excited by the new ideas emerging from the React/Next.js community!

At Frontend Masters, we have built a lot on Next.js: course websites, full-stack projects, and courses. We will continue to release courses on lower levels of the stack and welcome the ideas that Next.js is bringing to our web developer ecosystem!

Whatever happens from here, I wanted to write this little piece to give my respect and reassure folks in the community. We want developers to build the best apps possible and build their dream careers. And if that’s increasingly on Next.js, we’ll be here for it, doing our best to teach the framework and everything underneath it (JavaScript, TypeScript, React, browser APIs, etc).

✌️

How We’re Supporting Open Source Maintainers ($50,000)

I’m a huge proponent of open source, and here at Frontend Masters, we’re constantly looking for ways to support and promote various projects. Whether it’s through creating courseware for new projects, teaching our students how to work with them, engaging with maintainers, or even donating directly to some projects.

In fact, since 2019, we’ve donated over $250,000 via Open Collective.

This year, though, we’ve decided to do things slightly differently. Inspired by Chad Whitacre @ Sentry, rather than large donations to a handful of projects, we are switching to donating to as much of our dependency tree as possible.

As a first step in this experiment, we’ve allocated an initial budget of $50,000. We will distribute that money across the three channels detailed below.

10% of funds donated via GitHub Sponsors

We will donate $5,000 over the next 12 months to as many dependencies as possible on GitHub Sponsors. Our final list totaled 161 organizations and maintainers, ranging from $2 to $7 monthly. We achieved 93% coverage as some maintainers had set a minimum donation threshold larger than $10/mo.

While some may not see the value of smaller donations, we’ve heard from many that even small amounts can be motivating. In fact, according to the Linux Foundation’s 2020 FOSS Contributor Survey, most maintainers rank making an impact on the world, doing great work, and learning above getting paid. As such, we’d like to think of our small donation as a way to recognize and hopefully motivate a larger pool of maintainers.

10% of funds donated via Open Collective

Furthermore, we think it’s unhealthy for the ecosystem to have the larger projects sitting on multiple $100k of inactive funds in their Open Collective account while smaller projects go unrecognized.

Beyond our dependency tree, there are projects central to our courses, as well as critical tools and infrastructure projects that don’t appear in dependency trees at all.

We internally curated and ranked a list of 18 projects we’ve funded via Open Collective.

The remainder of the funds will be distributed via thanks.dev.

Like Sentry, we’ve found scaling open source sponsorship with thanks.dev a breeze. Our dependency tree consists of 1,100+ GitHub and GitLab projects, of which over 350 are fundable via various channels.

As noted previously, our goal this year is to achieve fairness and work towards improving the health of the communities we rely on. Our partnership with thanks.dev has allowed us to avoid the logistical nightmare of manually managing that many donations.

We’ve allocated a monthly budget of $3,125 and ranked our repos and the ecosystems we depend on. Thanks.dev is handling the rest.

As of now, in addition to our donations on GitHub Sponsors, there are 148 projects receiving funds via thanks.dev, with amounts ranging from $10 to $147. The full list is available here.

Final Thoughts

The beauty of Open Source lies in its near-pure meritocratic essence: projects providing significant value thrive with downloads and contributions, while others struggle to grow in contrast. However, the Linux Foundation’s 2020 OSS Contributor Survey also found that many maintainers have considered quitting due to burnout, with higher risk among popular projects.

Our experiment this year is an attempt to shine a light on this threat to open source.

While it’s commendable that large tech companies support the more significant Open Source projects via donations to foundations and providing compute resources & staffing, we’d like to encourage companies of all sizes to start supporting the smaller projects as well.

Finally, we at Frontend Masters joined FOSS Funders to learn from other companies sponsoring open source. We want to work with them to iterate and find a sustainable model that supports the hard work that goes into the software we all use and love!

I’d love to hear your feedback on our approach to funding open source; cheers!

Patterns for Reactivity with Modern Vanilla JavaScript

“Reactivity” is how systems react to changes in data. There are many types of reactivity, but for this article, reactivity is when data changes, you do things.

Reactivity Patterns are Core to Web Development

We handle a lot with JavaScript in websites and web apps since the browser is an entirely asynchronous environment. We must respond to user inputs, communicate with servers, log activity, and more. All these tasks involve updates to the UI, Ajax requests, browser URLs, and navigation changes, making cascading data changes a core aspect of web development.

As an industry, we associate reactivity with frameworks, but you can learn a lot by implementing reactivity in pure JavaScript. We can mix and match these patterns to wire behavior to data changes.

Learning core patterns with pure JavaScript will lead to less code and better performance in your web apps, no matter what tool or framework you use.

I love learning patterns because they apply to any language and system. Patterns can be combined to solve your app’s exact requirements, often leading to more performant and maintainable code.

Hopefully, you’ll learn new patterns to add to your toolbox, no matter what frameworks and libraries you use!

PubSub Pattern (Publish Subscriber)

PubSub is one of the most foundational patterns for reactivity. Firing an event with publish() allows anyone listening to that event via subscribe() to do whatever they want, fully decoupled from whatever fired the event.

const pubSub = {
  events: {},
  subscribe(event, callback) {
    if (!this.events[event]) this.events[event] = [];
    this.events[event].push(callback);
  },
  publish(event, data) {
    if (this.events[event]) this.events[event].forEach(callback => callback(data));
  }
};

pubSub.subscribe('update', data => console.log(data));
pubSub.publish('update', 'Some update'); // Some update

Note the publisher has no idea of what is listening to it, so there is no way to unsubscribe or clean up after itself with this simple implementation.
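
One possible way to address that (a sketch, not the only approach) is to have subscribe() return an unsubscribe function:

const pubSubWithCleanup = {
  events: {},
  subscribe(event, callback) {
    if (!this.events[event]) this.events[event] = [];
    this.events[event].push(callback);
    // Give the caller a way to clean up later
    return () => {
      this.events[event] = this.events[event].filter(cb => cb !== callback);
    };
  },
  publish(event, data) {
    if (this.events[event]) this.events[event].forEach(callback => callback(data));
  }
};

const unsubscribe = pubSubWithCleanup.subscribe('update', data => console.log(data));
pubSubWithCleanup.publish('update', 'Some update'); // Some update
unsubscribe();
pubSubWithCleanup.publish('update', 'No one is listening now'); // logs nothing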

Custom Events: Native Browser API for PubSub

The browser has a JavaScript API for firing and subscribing to custom events. It allows you to send data along with the custom events using dispatchEvent.

const pizzaEvent = new CustomEvent("pizzaDelivery", {
  detail: {
    name: "supreme",
  },
});

window.addEventListener("pizzaDelivery", (e) => console.log(e.detail.name));
window.dispatchEvent(pizzaEvent);

You can scope these custom events to any DOM node. In the code example, we use the global window object, also known as a global event bus, so anything in our app can listen and do something with the event data.

<div id="pizza-store"></div>
const pizzaEvent = new CustomEvent("pizzaDelivery", {
  detail: {
    name: "supreme",
  },
});

const pizzaStore = document.querySelector('#pizza-store');
pizzaStore.addEventListener("pizzaDelivery", (e) => console.log(e.detail.name));
pizzaStore.dispatchEvent(pizzaEvent);

Class Instance Custom Events: Subclassing EventTarget

We can subclass EventTarget to send out events on a class instance for our app to bind to:

class PizzaStore extends EventTarget {
  constructor() {
    super();
  }
  addPizza(flavor) {
    // fire event directly on the class
    this.dispatchEvent(new CustomEvent("pizzaAdded", {
      detail: {
        pizza: flavor,
      },
    }));
  }
}

const Pizzas = new PizzaStore();
Pizzas.addEventListener("pizzaAdded", (e) => console.log('Added Pizza:', e.detail.pizza));
Pizzas.addPizza("supreme");

The cool thing about this is your events aren’t firing globally on the window. You can fire an event directly on a class; anything in your app can wire up event listeners directly to that class.

Observer Pattern

The observer pattern has the same basic premise as the PubSub pattern. It allows you to have behavior “subscribed” to a Subject. And when the Subject fires the notify method, it notifies everything subscribed.

class Subject {
  constructor() {
    this.observers = [];
  }
  addObserver(observer) {
    this.observers.push(observer);
  }
  removeObserver(observer) {
    const index = this.observers.indexOf(observer);
    if (index > -1) {
      this.observers.splice(index, 1);
    }
  }
  notify(data) {
    this.observers.forEach(observer => observer.update(data));
  }
}

class Observer {
  update(data) {
    console.log(data);
  }
}

const subject = new Subject();
const observer = new Observer();

subject.addObserver(observer);
subject.notify('Everyone gets pizzas!');

The main difference between this and PubSub is that the Subject knows about its observers and can remove them. They aren’t completely decoupled like in PubSub.

Reactive Object Properties with Proxies

Proxies in JavaScript can be the foundation for performing reactivity after setting or getting properties on an object.

const handler = {
  get: function(target, property) {
    console.log(`Getting property ${property}`);
    return target[property];
  },
  set: function(target, property, value) {
    console.log(`Setting property ${property} to ${value}`);
    target[property] = value;
    return true; // indicates that the setting has been done successfully
  }
};

const pizza = { name: 'Margherita', toppings: ['tomato sauce', 'mozzarella'] };
const proxiedPizza = new Proxy(pizza, handler);

console.log(proxiedPizza.name); // Outputs "Getting property name" and "Margherita"
proxiedPizza.name = 'Pepperoni'; // Outputs "Setting property name to Pepperoni"

When you access or modify a property on the proxiedPizza, it logs a message to the console. But you could imagine wiring any functionality to property access on an object.
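
For instance, here’s a hedged sketch of a tiny reactive store that re-renders whenever a property is set (the createStore name and #pizza-name element are made up for this example):

function createStore(initialState, onChange) {
  return new Proxy(initialState, {
    set(target, property, value) {
      target[property] = value;
      onChange(target); // run any side effect: render, log, persist, etc.
      return true;
    }
  });
}

const state = createStore({ pizzaOfTheDay: 'Margherita' }, (s) => {
  // assumes an element with id "pizza-name" exists in the page
  document.querySelector('#pizza-name').textContent = s.pizzaOfTheDay;
});

state.pizzaOfTheDay = 'Pepperoni'; // the DOM updates automatically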

Reactive Individual Properties: Object.defineProperty

You can do an identical thing for a specific property using Object.defineProperty. You can define getters and setters for properties and run code when a property is accessed or modified.

const pizza = {
  _name: 'Margherita', // Internal property
};

Object.defineProperty(pizza, 'name', {
  get: function() {
    console.log(`Getting property name`);
    return this._name;
  },
  set: function(value) {
    console.log(`Setting property name to ${value}`);
    this._name = value;
  }
});

// Example usage:
console.log(pizza.name); // Outputs "Getting property name" and "Margherita"
pizza.name = 'Pepperoni'; // Outputs "Setting property name to Pepperoni"

Here, we’re using Object.defineProperty to define a getter and setter for the name property of the pizza object. The actual value is stored in a private _name property, and the getter and setter provide access to that value while logging messages to the console.

Object.defineProperty is more verbose than using a Proxy, especially if you want to apply the same behavior to many properties. But it’s a powerful and flexible way to define custom behavior for individual properties.
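
If you do want this behavior across many properties, one sketch is a small helper that loops over an object’s keys and defines a getter/setter for each:

function makeReactive(obj, onChange) {
  Object.keys(obj).forEach((key) => {
    let internalValue = obj[key];
    Object.defineProperty(obj, key, {
      get() {
        return internalValue;
      },
      set(value) {
        internalValue = value;
        onChange(key, value);
      }
    });
  });
  return obj;
}

const pizzaState = makeReactive({ name: 'Margherita', size: 'large' }, (key, value) => {
  console.log(`"${key}" changed to "${value}"`);
});

pizzaState.size = 'extra large'; // "size" changed to "extra large"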

Asynchronous Reactive Data with Promises

Let’s make using the observers asynchronous! This way we can update the data and have multiple observers run asynchronously.

class AsyncData {
  constructor(initialData) {
    this.data = initialData;
    this.subscribers = [];
  }

  // Subscribe to changes in the data
  subscribe(callback) {
    if (typeof callback !== 'function') {
      throw new Error('Callback must be a function');
    }
    this.subscribers.push(callback);
  }

  // Update the data and wait for all updates to complete
  async set(key, value) {
    this.data[key] = value;

    // Call the subscribed function and wait for it to resolve
    const updates = this.subscribers.map(async (callback) => {
      await callback(key, value);
    });

    await Promise.allSettled(updates);
  }
}

Here’s a class that wraps a data object and triggers an update when the data changes.

Awaiting Our Async Observers

Let’s say we want to wait until all subscriptions to our asynchronous reactive data are processed:

const data = new AsyncData({ pizza: 'Pepperoni' });

data.subscribe(async (key, value) => {
  await new Promise(resolve => setTimeout(resolve, 500));
  console.log(`Updated UI for ${key}: ${value}`);
});

data.subscribe(async (key, value) => {
  await new Promise(resolve => setTimeout(resolve, 1000));
  console.log(`Logged change for ${key}: ${value}`);
});

// function to update data and wait for all updates to complete
async function updateData() {
  await data.set('pizza', 'Supreme'); // This will call the subscribed functions and wait for their promises to resolve
  console.log('All updates complete.');
}

updateData();

Our updateData function is now async, so we can wait for all the subscribed functions to resolve before continuing our program. This pattern makes juggling asynchronous reactivity a bit simpler.

Reactive Systems

Many more complex reactive systems are at the foundations of popular libraries and frameworks: hooks in React, Signals in Solid, Observables in Rx.js, and more. They usually have the same basic premise of when data changes, re-render the components or associated DOM fragments.

Observables (Pattern of Rx.js)

Observables and Observer Pattern are not the same despite being nearly the same word, lol.

Observables allow you to define a way to produce a sequence of values over time. Here is a simple Observable primitive that provides a way to emit a sequence of values to subscribers, allowing them to react as those values are produced.

class Observable {
  constructor(producer) {
    this.producer = producer;
  }

  // Method to allow a subscriber to subscribe to the observable
  subscribe(observer) {
    // Ensure the observer has the necessary functions
    if (typeof observer !== 'object' || observer === null) {
      throw new Error('Observer must be an object with next, error, and complete methods');
    }

    if (typeof observer.next !== 'function') {
      throw new Error('Observer must have a next method');
    }

    if (typeof observer.error !== 'function') {
      throw new Error('Observer must have an error method');
    }

    if (typeof observer.complete !== 'function') {
      throw new Error('Observer must have a complete method');
    }

    const unsubscribe = this.producer(observer);

    // Return an object with an unsubscribe method
    return {
      unsubscribe: () => {
        if (unsubscribe && typeof unsubscribe === 'function') {
          unsubscribe();
        }
      },
    };
  }
}

Here’s how you would use them:

// Create a new observable that emits three values and then completes
const observable = new Observable(observer => {
  observer.next(1);
  observer.next(2);
  observer.next(3);
  observer.complete();

  // Optional: Return a function to handle any cleanup if the observer unsubscribes
  return () => {
    console.log('Observer unsubscribed');
  };
});

// Define an observer with next, error, and complete methods
const observer = {
  next: value => console.log('Received value:', value),
  error: err => console.log('Error:', err),
  complete: () => console.log('Completed'),
};

// Subscribe to the observable
const subscription = observable.subscribe(observer);

// Optionally, you can later unsubscribe to stop receiving values
subscription.unsubscribe();

The critical components of an Observable are the next() method, which sends data to the observers; a complete() method for when the Observable stream closes; and an error() method for when something goes wrong. There also has to be a way to subscribe() to listen for changes and unsubscribe() to stop receiving data from the stream.

The most popular libraries that use this pattern are Rx.js and MobX.

“Signals” (Pattern of SolidJS)

Hat tip Ryan Carniato’s Reactivity with SolidJS course.

const context = [];

export function createSignal(value) {
    const subscriptions = new Set();

    const read = () => {
        const observer = context[context.length - 1]
        if (observer) subscriptions.add(observer);
        return value;
    }
    const write = (newValue) => {
        value = newValue;
        for (const observer of subscriptions) {
            observer.execute()
        }
    }

    return [read, write];
}

export function createEffect(fn) {
    const effect = {
        execute() {
            context.push(effect);
            fn();
            context.pop();
        }
    }

    effect.execute();
}

Using the reactive system:

import { createSignal, createEffect } from "./reactive";

const [count, setCount] = createSignal(0);

createEffect(() => {
  console.log(count());
}); // 0

setCount(10); // 10

Here’s the complete code for his vanilla reactivity system with a code sample that Ryan writes in his course.

“Observable-ish” Values (Frontend Masters)

Our Frontend Masters video player has many configurations that could change anytime to modify video playback. Kai on our team created “Observable-ish” Values (many years ago now, but we just published it for this article’s sake), which is another take on a reactive system in vanilla JavaScript.

It’s less than 100 lines of code and has stood the test of time! For 7+ years, this tiny bit of code has underpinned delivering millions of hours of video. It’s a mix of PubSub with the ability to have computed values by adding the results of multiple publishers together.

Here’s how you use the “Observable-ish” values. Publish changes to subscriber functions when values change:

const fn = function(current, previous) {}

const obsValue = ov('initial');
obsValue.subscribe(fn);      // subscribe to changes
obsValue();                  // 'initial'
obsValue('initial');         // identical value, no change
obsValue('new');             // fn('new', 'initial')
obsValue.value = 'silent';   // silent update

Modifying arrays and objects will not publish, but replacing them will.

const obsArray = ov([1, 2, 3]);
obsArray.subscribe(fn);
obsArray().push(4);          // silent update
obsArray.publish();          // fn([1, 2, 3, 4]);
obsArray([4, 5]);            // fn([4, 5], [1, 2, 3]);

Passing a function caches the result as the value. Any extra arguments will be passed to the function. Any observables called within the function will be subscribed to, and updates to those observables will recompute the value.

Child observables must be called; mere references are ignored. If the function returns a Promise, the value is assigned async after resolution.

const a = ov(1);
const b = ov(2);
const computed = ov(arg => a() + b() + arg, 3);
computed.subscribe(fn);
computed();                  // fn(6)
a(2);                        // fn(7, 6)

Reactive Rendering of UI

Here are some patterns for writing and reading from the DOM and CSS.

Render Data to HTML String Literals

Here’s a simple example of rendering some pizza UI based on data.

function PizzaRecipe(pizza) {
  return `<div class="pizza-recipe">
    <h1>${pizza.name}</h1>
    <h3>Toppings: ${pizza.toppings.join(', ')}</h3>
    <p>${pizza.description}</p>
  </div>`;
}

function PizzaRecipeList(pizzas) {
  return `<div class="pizza-recipe-list">
    ${pizzas.map(PizzaRecipe).join('')}
  </div>`;
}

var allPizzas = [
  {
    name: 'Margherita',
    toppings: ['tomato sauce', 'mozzarella'],
    description: 'A classic pizza with fresh ingredients.'
  },
  {
    name: 'Pepperoni',
    toppings: ['tomato sauce', 'mozzarella', 'pepperoni'],
    description: 'A favorite among many, topped with delicious pepperoni.'
  },
  {
    name: 'Veggie Supreme',
    toppings: ['tomato sauce', 'mozzarella', 'bell peppers', 'onions', 'mushrooms'],
    description: 'A delightful vegetable-packed pizza.'
  }
];

// Render the list of pizzas
function renderPizzas() {
  document.querySelector('body').innerHTML = PizzaRecipeList(allPizzas);
}

renderPizzas(); // Initial render

// Example of changing data and re-rendering
function addPizza() {
  allPizzas.push({
    name: 'Hawaiian',
    toppings: ['tomato sauce', 'mozzarella', 'ham', 'pineapple'],
    description: 'A tropical twist with ham and pineapple.'
  });

  renderPizzas(); // Re-render the updated list
}

// Call this function to add a new pizza and re-render the list
addPizza();

addPizza demonstrates how to change the data by adding a new pizza recipe to the list and then re-rendering the list to reflect the changes.

The main drawback of this approach is you blow away the entire DOM on every render. You can more intelligently update only the bits of DOM that change using a library like lit-html (lit-html usage guide). We do this with several highly dynamic components on Frontend Masters, like our data grid component.
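
As a rough illustration of that approach (assuming lit-html is installed; check its docs for specifics), only the dynamic parts of the template are updated on subsequent renders:

import { html, render } from 'lit-html';

const PizzaRecipeListTemplate = (pizzas) => html`
  <div class="pizza-recipe-list">
    ${pizzas.map((pizza) => html`
      <div class="pizza-recipe">
        <h1>${pizza.name}</h1>
        <p>${pizza.description}</p>
      </div>`)}
  </div>`;

// Call this whenever the data changes; lit-html diffs the template parts
// rather than blowing away the whole subtree
render(PizzaRecipeListTemplate(allPizzas), document.querySelector('body'));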

See examples of other approaches in the Vanilla TodoMVC repo and associated Vanilla TodoMVC article.

Reactive DOM Attributes: MutationObserver

One way to make DOM reactive is to add and remove attributes. We can listen to changes in attributes using the MutationObserver API.

const mutationCallback = (mutationsList) => {
  for (const mutation of mutationsList) {
    if (
      mutation.type !== "attributes" ||
      mutation.attributeName !== "pizza-type"
    ) return;

    console.log('old:', mutation.oldValue)
    console.log('new:', mutation.target.getAttribute("pizza-type"))
  }
}
const observer = new MutationObserver(mutationCallback);
observer.observe(document.getElementById('pizza-store'), { attributes: true, attributeOldValue: true }); // attributeOldValue makes mutation.oldValue available

Now we can update the pizza-type attribute from anywhere in our program, and the element itself can have behavior attached to updating that attribute!
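
For example, any other code in the app can now trigger that behavior just by writing the attribute:

document.getElementById('pizza-store').setAttribute('pizza-type', 'supreme');
// logs the old value (null on the first change) and "new: supreme"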

Reactive Attributes in Web Components

With Web Components, there is a native way to listen and react to attribute updates.

class PizzaStoreComponent extends HTMLElement {
  static get observedAttributes() {
    return ['pizza-type'];
  }

  constructor() {
    super();
    const shadowRoot = this.attachShadow({ mode: 'open' });
    shadowRoot.innerHTML = `<p>${this.getAttribute('pizza-type') || 'Default Content'}</p>`;
  }

  attributeChangedCallback(name, oldValue, newValue) {
    if (name === 'pizza-type') {
      this.shadowRoot.querySelector('p').textContent = newValue;
      console.log(`Attribute ${name} changed from ${oldValue} to ${newValue}`);
    }
  }
}

customElements.define('pizza-store', PizzaStoreComponent);
<pizza-store pizza-type="Supreme"></pizza-store>
document.querySelector('pizza-store').setAttribute('pizza-type', 'BBQ Chicken!');

This is a bit simpler, but we have to use Web Components to use this API.

Reactive Scrolling: IntersectionObserver

We can wire reactivity to DOM elements scrolling into view. I’ve used this for slick animations on our marketing pages.

var pizzaStoreElement = document.getElementById('pizza-store');

var observer = new IntersectionObserver(function(entries, observer) {
  entries.forEach(function(entry) {
    if (entry.isIntersecting) {
      entry.target.classList.add('animate-in');
    } else {
      entry.target.classList.remove('animate-in');
    }
  });
});

observer.observe(pizzaStoreElement);

Here’s an example scrolling animation CodePen in very few lines of code using IntersectionObserver.

Animation & Game Loop: requestAnimationFrame

When working with game development, Canvas, WebGL, or those wild marketing sites, animations often require writing to a buffer and then writing the results on a given loop when the rendering thread becomes available. We do this with requestAnimationFrame.

function drawStuff() {
  // This is where you'd do game or animation rendering logic
}

// function to handle the animation
function animate() {
  drawStuff();
  requestAnimationFrame(animate); // Continually calls animate when the next render frame is available
}

// Start the animation
animate();

This is the method games and anything that involves real-time rendering use to render the scene when frames become available.

Reactive Animations: Web Animations API

You can also create reactive animations with the Web Animations API. Here we will animate an element’s scale, position, and color using the animation API.

const el = document.getElementById('animatedElement');

// Define the animation properties
const animation = el.animate([
  // Keyframes
  { transform: 'scale(1)', backgroundColor: 'blue', left: '50px', top: '50px' },
  { transform: 'scale(1.5)', backgroundColor: 'red', left: '200px', top: '200px' }
], {
  // Timing options
  duration: 1000,
  fill: 'forwards'
});

// Set the animation's playback rate to 0 to pause it
animation.playbackRate = 0;

// Add a click event listener to the element
el.addEventListener('click', () => {
  // If the animation is paused, play it
  if (animation.playbackRate === 0) {
    animation.playbackRate = 1;
  } else {
    // If the animation is playing, reverse it
    animation.reverse();
  }
});

What’s reactive about this is that the animation can play relative to where it is located when an interaction occurs (in this case, reversing its direction). Standard CSS animations and transitions aren’t relative to their current position.

Reactive CSS: Custom Properties and calc

Lastly, we can write CSS that’s reactive by combining custom properties and calc.

In JavaScript, you can set a custom property value:

barElement.style.setProperty('--percentage', newPercentage);

.bar {
  width: calc(100% / 4 - 10px);
  height: calc(var(--percentage) * 1%);
  background-color: blue;
  margin-right: 10px;
  position: relative;
}

And in the CSS, we can now do calculations based on that percentage. It’s pretty cool that we can add calculations right into the CSS and let CSS do its job of styling without having to keep all that rendering logic in JavaScript.

FYI: You can also read these properties if you want to create changes relative to the current value.

getComputedStyle(barElement).getPropertyValue('--percentage');

The Many Ways to Achieve Reactivity

It’s incredible how many ways we can achieve reactivity using very little code in modern vanilla JavaScript. We can combine these patterns in any way we see fit for our apps to reactively render, log, animate, handle user events, and all the things that can happen in the browser.

Next, check out the JavaScript Learning Path and learn JavaScript deeply from awesome instructors like Anjana Vakil, Will Sentance and Kyle Simpson! Or dive right into the most loved course on the platform, JavaScript: The Hard Parts!

~ Frontend Masters Team

Writing a TodoMVC App with Modern Vanilla JavaScript

I took a shot at coding TodoMVC with modern (ES6+), vanilla JavaScript, and it only took ~170 lines of code and just over an hour! Compare this to the old/official TodoMVC vanilla JS solution, which has over 900 lines of code. An 80%+ reduction in code! I ❤️ the new state of JavaScript.

The code has received over 🤩 600 stars on GitHub.

In general, the responses were very positive. But as with all popular things, eventually, they spark debate.

React / Frameworks vs. Vanilla JS: Top Four Arguments for Frameworks

#1: “Frameworks Enable Declarative UI”

Modern frameworks like React and Vue don’t exist to fill in the gap left by native JS, they exist so that you write your application in a declarative way where the view is rendered as a function of state.

IMO this is simply a design pattern. Patterns apply in any language.

You can accomplish roughly the same thing in vanilla JavaScript. In my code, when the model changes, it fires a save event, and then I wire App.render() to it, which renders the App using the Todos model.

Todos.addEventListener('save', App.render);

Template strings end up pretty easy to work with when you want to re-render parts of the App from scratch as a framework would:

`
  <div class="view">
    <input class="toggle" type="checkbox" ${todo.completed ? 'checked' : ''}>
    <label></label>
    <button class="destroy"></button>
  </div>
  <input class="edit">
`

The entire App render method is only eleven lines, and it re-renders everything the App needs to based on the state of the App:

render() {
  const count = Todos.all().length;
  App.$.setActiveFilter(App.filter);
  App.$.list.replaceChildren(
    ...this.Todos.all(this.filter).map((todo) => this.renderTodo(todo))
  );
  App.$.showMain(count);
  App.$.showFooter(count);
  App.$.showClear(Todos.hasCompleted());
  App.$.toggleAll.checked = Todos.isAllCompleted();
  App.$.displayCount(Todos.all('active').length);
}

Here I could have chosen to rebuild the entire UI as a template string as a function of state, but instead, it is ultimately more performant to create these DOM helper methods and modify what I want.

#2: “Frameworks Provide Input Sanitization”

The best way to sanitize user input is to use node.textContent.

insertHTML(li, `
  <div class="view">
    <input class="toggle" type="checkbox" ${todo.completed ? 'checked' : ''}>
    <label></label>
    <button class="destroy"></button>
  </div>
  <input class="edit">
`);
li.querySelector('label').textContent = todo.title;

Any user input must be set to the DOM using textContent. If you do that, then you’re fine.

Beyond this, there is a new Trusted Types API for sanitizing generated HTML. I would use this new API if I were generating nested markup with dynamic, user-input data. (Note that this new API isn’t available yet in Safari, but hopefully, it will be soon)

Trusted Types not being everywhere is fine. You can use them where they’re supported and get early warning of issues. Security improves as browsers improve, and usage turns into an incentive for lagging engines (source)

Suppose you want a library to build your app template strings without using textContent manually. In that case, you can use a library like DOMPurify, which uses Trusted Types API under the hood.
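
Here’s a hedged sketch of what that looks like (assuming DOMPurify is installed and an #output element exists; see its docs for configuration options):

import DOMPurify from 'dompurify';

const userInput = '<img src=x onerror=alert(1)>Hello'; // pretend this came from a form
const clean = DOMPurify.sanitize(userInput);           // strips the onerror handler

document.querySelector('#output').innerHTML = clean;   // now safe to insert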

#3: “Frameworks Provide DOM Diffing and DOM Diffing is Necessary”

The most common criticism was the lack of DOM Diffing in vanilla JavaScript.

A reactive UI/diff engine is non-negotiable for me.

Diffing is exactly what you need to do (barring newer methods like svelte) to figure out what to tell the browser to change. The vdom tree is much faster to manipulate than DOM nodes.

However, I think this is a much more balanced take:

Diffing seems necessary when your UI gets complicated to the level that a small change requires a full page re-render. However, I don’t think this is necessary for at least 95% of the websites on the internet.

I agree most websites and web apps don’t suffer from this issue, even when re-rendering the needed components based on vanilla’s application state like a framework.

Lastly, I’ll note that DOM diffing is inefficient for getting reactive updates because it doubles up data structures. Lit, Svelte, Stencil, Solid, and many others don’t need it and are way more performant as a result. These approaches win on performance and memory use, which matters because garbage collection hurts the UX.

Modern frameworks necessitate that you render the entire App client-side which makes your apps slow by default.

My issue with modern frameworks forcing a declarative UI (see #1) and DOM diffing (see #3) is that they necessitate unnecessary rendering and slow startup times. Remix is trying to avoid this by rendering server-side then “hydrating,” and new approaches like Qwik are trying to skip hydration altogether. It’s an industry-wide problem, and people are trying to address it.

In my vanilla JavaScript projects, I only re-render the most minimal parts of the page necessary. Template strings everywhere, and especially adding DOM diffing, is inefficient. It forces you to render all of your App client-side increasing startup time and the amount the client has to do overall each time data changes.

That said, if you do need DOM diffing in parts of a vanilla app, libraries like morphdom do just that. There is also a fantastic templating library called Lit-html that solves this problem of making your App more declarative in a tiny package (~3KB), and you can continue using template strings with that.
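
For instance, here’s a minimal morphdom sketch (assuming the package is installed and a #todo-list element exists) that patches an existing list in place:

import morphdom from 'morphdom';

const list = document.querySelector('#todo-list');

function update(todos) {
  const nextHTML = `<ul id="todo-list">
    ${todos.map((todo) => `<li>${todo.title}</li>`).join('')}
  </ul>`;
  // morphdom diffs the live DOM against the new markup and only
  // touches the nodes that actually changed
  morphdom(list, nextHTML);
}

update([{ title: 'Write vanilla JS' }, { title: 'Ship it' }]);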

#4: “Frameworks Scale, Vanilla JavaScript Will Never Scale”

I have built many large vanilla JavaScript projects and scaled them across developers, making the companies I worked for tons of money, and these apps still exist today. 🕺✨

Conventions and idioms are always needed, no matter if you build on top of a framework or not.

At the end of the day, your codebase will only be as good as your team, not the framework.

The way vanilla JS scales is the same way any framework scales. You have to have intelligent people talk about the needs of the codebase and project.

App Architecture Branch:

That said, here’s an example of adding ~20 lines of structure to the code in the app architecture branch. It splits the code into a TodoList and App component. Each component implements a render method that optionally renders a filtered view of the data.

Overall, I’d argue these solutions are more performant, use less code (only ~200 lines), and are more straightforward than most, if not all, of the TodoMVC implementations on the internet, all without a framework.

Here are Eight Vanilla JavaScript Tips from the Code

#1. Sanitization

User input must be sanitized before being displayed in the HTML to prevent XSS (Cross-Site Scripting). Therefore new todo titles are set on the rendered element using textContent rather than interpolated into the template string:

li.querySelector('label').textContent = todo.title;

#2. Event Delegation

Since we render the todos frequently, it doesn’t make sense to bind event listeners and clean them up every time. Instead, we bind our events to the parent list that always exists in the DOM and infer which todo was clicked or edited by setting the data attribute of the item $li.dataset.id = todo.id;

Event delegation uses the matches selector:

export const delegate = (el, selector, event, handler) => {
  el.addEventListener(event, e => {
    if (e.target.matches(selector)) handler(e, el);
  });
}

When something inside the list is clicked, we read that data attribute id from the inner list item and use it to grab the todo from the model:

delegate(App.$.list, selector, event, e => {
  let $el = e.target.closest('[data-id]');
  handler(Todos.get($el.dataset.id), $el, e);
});

#3. insertAdjacentHTML

insertAdjacentHTML is much faster than innerHTML because it doesn’t have to destroy the DOM first before inserting.

export const insertHTML = (el, html) => {
  el.insertAdjacentHTML("afterbegin", html);
}

Bonus tip: Jonathan Neal taught me through a PR that you can empty elements and replace the contents with el.replaceChildren() — thanks Jonathan!

#4. Grouping DOM Selectors & Methods

DOM selectors and modifications are scoped to the App.$.* namespace. In a way, it makes it self-documenting what our App could potentially modify in the document.

$: {
  input: document.querySelector('[data-todo="new"]'),
  toggleAll: document.querySelector('[data-todo="toggle-all"]'),
  clear: document.querySelector('[data-todo="clear-completed"]'),
  list: document.querySelector('[data-todo="list"]'),
  count: document.querySelector('[data-todo="count"]'),
  showMain(show) {
    document.querySelector('[data-todo="main"]').style.display = show ? 'block': 'none';
  },
  showFooter(show) {
    document.querySelector('[data-todo="main"]').style.display = show ? 'block': 'none';
  },
  showClear(show) {
    App.$.clear.style.display = show ? 'block': 'none';
  },
  setActiveFilter(filter) {
    document.querySelectorAll('[data-todo="filters"] a').forEach(el => el.classList.remove('selected')),
    document.querySelector(`[data-todo="filters"] [href="#/${filter}"]`).classList.add('selected');
  },
  displayCount(count) {
    replaceHTML(App.$.count, `
      <strong>${count}</strong>
      ${count === 1 ? 'item' : 'items'} left
    `);
  }
},

#5. Send Events on a Class Instance with Subclassing EventTarget

We can subclass EventTarget to send out events on a class instance for our App to bind to:

export const TodoStore = class extends EventTarget {

In this case, when the store updates, it sends an event:

this.dispatchEvent(new CustomEvent('save'));

The App listens to that event and re-renders itself based on the new store data:

Todos.addEventListener('save', App.render);

#6. Group Setting Up Event Listeners

It is essential to know exactly where the global event listeners are set. An excellent place to do that is in the App init method:

init() {
  Todos.addEventListener('save', App.render);
  App.filter = getURLHash();
  window.addEventListener('hashchange', () => {
    App.filter = getURLHash();
    App.render();
  });
  App.$.input.addEventListener('keyup', e => {
    if (e.key === 'Enter' && e.target.value.length) {
      Todos.add({ title: e.target.value, completed: false, id: "id_" + Date.now() })
      App.$.input.value = '';
    }
  });
  App.$.toggleAll.addEventListener('click', e => {
    Todos.toggleAll();
  });
  App.$.clear.addEventListener('click', e => {
    Todos.clearCompleted();
  });
  App.bindTodoEvents();
  App.render();
},

Here we set up all the global event listeners, subscribe to the store mentioned above, and then initially render the App.

Similarly, when you create new DOM elements and insert them into the page, group the event listeners associated with the new elements near where they are made.

#7. Use Data Attributes in Markup & Selectors

One issue with JavaScript is your selectors get tightly coupled to the generated DOM.

To fix this, classes should be used for CSS rules, and data attributes for JavaScript behavior.

<div data-jsmodule="behavior"></div>
document.querySelector('[data-jsmodule="behavior"]')

#8. Render the State of the World Based on Data (Data Flowing Down)

Lastly, to reiterate what I said above, render everything based on the state in the render() method. This is a pattern lifted from modern frameworks.

Make sure you update the DOM based on your App state, not the other way around.

It’s even better if you avoid reading the DOM to derive any part of your app state, aside from finding your target for event delegation.

Side note: I like to rely on the server to generate the markup for faster boot times, then take control of the bits we show. Have the CSS initially hide things you don’t need, and then have the JavaScript show the elements based on the state. Let the server do most of the work where you can, rather than wait for the entire App to render client-side.

In Conclusion

Vanilla JS is Viable Today for Building Web Apps

JavaScript is better today than it has ever been.

The fact that I could shave 80% of the code off the previous TodoMVC implementation from years ago, practically at the drop of a hat, feels terrific. Plus, we now have established design patterns that we can lift from modern frameworks and apply to vanilla JavaScript projects to make our UIs as declarative as we like.

As an industry we should consider pure JavaScript as an option for more projects.

Finally, as Web Components get more ergonomic, we will even have a way to share our code in an interoperable and framework-agnostic way.

I hope you enjoyed the post. Please send your feedback to me @1marc on Twitter. Cheers!


Bonus: Performant Rendering of Large Lists

The code for rendering the entire list contents on model change is clean because data flows down, but it potentially will not be as performant for rendering large lists.

More Performant & Granular DOM Updates with Vanilla

Here’s a branch sending specific events with context from the model so we can make DOM updates more selectively as we need them: (see granular DOM updates diff).

performant-rendering branch

More Performant DOM Updates with lit-html (plus animations!)

We can achieve the same performant DOM updates with far less code by adopting lit-html using the repeat directive: (see adding lit-html diff).

animation-lithtml branch

