Introducing Svelte 5

Svelte has always been a delightful, simple, and fun framework to use. It’s a framework that’s always prioritized developer experience (DX), while producing a light and fast result with minimal JavaScript. It achieves this nice DX by giving users dirt simple idioms and a required compiler that makes everything work. Unfortunately, it used to be fairly easy to break Svelte’s reactivity. It doesn’t matter how fast a website is if it’s broken.

These reliability problems with reactivity are gone in Svelte 5.

In this post, we’ll get into the exciting Svelte 5 release (in Beta at the time of this writing). Svelte is the latest framework to add signals to power its reactivity. Svelte is now every bit as capable of handling robust web applications, with complex state, as alternatives like React and Solid. Best of all, it achieved this with only minimal hits to DX. It’s every bit as fun and easy to use as it was, but it’s now truly reliable, while still producing faster and lighter sites.

Let’s jump in!

The Plan

Let’s go through various pieces of Svelte, look at the “old” way, and then see how Svelte 5 changes things for the better. We’ll cover:

  1. State
  2. Props
  3. Effects

If you find this helpful, let me know, as I’d love to cover snippets and Svelte’s exciting new fine-grained reactivity.

As of this writing, Svelte 5 is late in the Beta phase. The API should be stable, although it’s certainly possible some new things might get added.

The docs are also still in beta, so here’s a preview URL for them. Svelte 5 might be released when you read this, at which point these docs will be on the main Svelte page. If you’d like to see the code samples below in action, you can find them in this repo.

State

Effectively managing state is probably the most crucial task for any web framework, so let’s start there.

State used to be declared with regular, plain old variable declarations, using let.

let value = 0;

Derived state was declared with a quirky, but technically valid JavaScript syntax of $:. For example:

let value = 0;
$: doubleValue = value * 2;

Svelte’s compiler would (in theory) track changes to value, and update doubleValue accordingly. I say in theory since, depending on how creatively you used value, some of the re-assignments might not make it to all of the derived state that used it.

You could also put entire code blocks after $: and run arbitrary code. Svelte would look at what you were referencing inside the code block, and re-run it when those things changed.

$: {
  console.log("Value is ", value);
}

Stores

Those variable declarations and the special $: syntax were limited to Svelte components. If you wanted to build some portable state you could define anywhere, and pass around, you’d use a store.

We won’t go through the whole API, but here’s a minimal example of a store in action. We’ll define a piece of state that holds a number, and, based on what that number is at any time, spit out a label indicating whether the number is even or odd. It’s silly, but it should show us how stores work.

import { derived, writable } from "svelte/store";

export function createNumberInfo(initialValue: number = 0) {
  const value = writable(initialValue);

  const derivedInfo = derived(value, value => {
    return {
      value,
      label: value % 2 ? "Odd number" : "Even number",
    };
  });

  return {
    update(newValue: number) {
      value.set(newValue);
    },
    numberInfo: derivedInfo,
  };
}

Writable stores exist to write values to. Derived stores take one or more other stores, read their current values, and project a new payload. If you want to provide a mechanism to set a new value, close over what you need to. To consume a store’s value, prefix it with a $ in a Svelte component. It’s not shown here, but there’s also a subscribe method on stores, and a get import. If the store returns an object with properties, you can either “dot through” to them, or you can use a reactive assignment ($:) to get those nested values. The example below shows both, and this distinction will come up later when we talk about interoperability between Svelte 4 and 5.

<script lang="ts">
  import { createNumberInfo } from './numberInfoStore';

  let store = createNumberInfo(0);

  $: ({ numberInfo, update } = store);
  $: ({ label, value } = $numberInfo);
</script>

<div class="flex flex-col gap-2 p-5">
  <span>{$numberInfo.value}</span>
  <span>{$numberInfo.label}</span>
  <hr />
  <span>{value}</span>
  <span>{label}</span>

  <button onclick={() => update($numberInfo.value + 1)}>
    Increment count
  </button>
</div>

This was the old Svelte.

This is a post on the new Svelte, so let’s turn our attention there.

State in Svelte 5

Things are substantially simpler in Svelte 5. Pretty much everything is managed by something new called “runes.” Let’s see what that means.

Runes

Svelte 5 joins the increasing number of JavaScript frameworks that use signals to power reactivity. It does so through a new feature called runes, which use signals under the covers. Runes cover a wide range of needs, from state to props and even side effects. Here’s a good introduction to runes.

To create a piece of state, we use the $state rune. You don’t import it, you just use it — it’s part of the Svelte language.

let count = $state(0);

For values whose types can’t be inferred, you can provide a generic type argument:

let currentUser = $state<User | null>(null);

What if you want to create some derived state? Before we did:

$: countTimes2 = count * 2;

In Svelte 5 we use the $derived rune.

let countTimes2 = $derived(count * 2);

Note that we pass in a raw expression. Svelte will run it, see what it depends on, and re-run it as needed. There’s also a $derived.by rune if you want to pass an actual function.
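
For example, here’s a sketch of the same derived value written with $derived.by, which is handy when the computation needs a few statements rather than a single expression:

let countTimes2 = $derived.by(() => {
  // room for intermediate logic before returning
  const doubled = count * 2;
  return doubled;
});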

If you want to use these state values in a Svelte template, you just use them. No need for special $ syntax to prefix the runes like we did with stores. You reference the values in your templates, and they update as needed.

If you want to update a state value, you assign to it:

count = count + 1;
// or count++;
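
Putting those pieces together, a minimal counter component might look something like this (a sketch, not taken from the original post):

<script>
  let count = $state(0);
  let countTimes2 = $derived(count * 2);
</script>

<button onclick={() => count++}>
  Clicked {count} times (doubled: {countTimes2})
</button>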

What about stores?

We saw before that defining portable state outside of components was accomplished via stores. Stores are not deprecated in Svelte 5, but there’s a good chance they’re on their way out of the framework. You no longer need them, and they’re replaced with what we’ve already seen. That’s right, the $state and $derived runes we saw before can be defined outside of components in top-level TypeScript (or JavaScript) files. Just be sure to name your file with a .svelte.ts extension, so the Svelte compiler knows to enable runes in these files. Let’s take a look!

Let’s re-implement our number / label code from before, in Svelte 5. This is what it looked like with stores:

import { derived, writable } from "svelte/store";

export function createNumberInfo(initialValue: number = 0) {
  const value = writable(initialValue);

  const derivedInfo = derived(value, value => {
    return {
      value,
      label: value % 2 ? "Odd number" : "Even number",
    };
  });

  return {
    update(newValue: number) {
      value.set(newValue);
    },
    numberInfo: derivedInfo,
  };
}

Here it is with runes:

export function createNumberInfo(initialValue: number = 0) {
  let value = $state(initialValue);
  let label = $derived(value % 2 ? "Odd number" : "Even number");

  return {
    update(newValue: number) {
      value = newValue;
    },
    get value() {
      return value;
    },
    get label() {
      return label;
    },
  };
}

It’s 3 lines shorter, but more importantly, much simpler. We declared our state. We computed our derived state. And we returned them both, along with a method that updates our state.

You may be wondering why we did this:

  get value() {
    return value;
  },
  get label() {
    return label;
  }

rather than just referencing those properties. The reason is that reading that state evaluates the state rune at that moment, and if we’re reading it in a reactive context (like a Svelte component binding, or inside of a $derived expression), a subscription is set up that updates any time that piece of state changes. If we had done it like this:

// this won't work
return {
  update(newValue: number) {
    value = newValue;
  },
  value,
  label,
};

That wouldn’t have worked because those value and label pieces of state would be read and evaluated right there in the return value, with those raw values getting injected into that object. They would not be reactive, and they would never update.

That’s about it! Svelte 5 ships a few universal state primitives which can be used outside of components and easily composed into larger reactive structures. What’s especially exciting is that Svelte’s component bindings have also been updated, and now support fine-grained reactivity that didn’t exist before.

Props

Defining state inside of a component isn’t too useful if you can’t pass it on to other components as props. Props are also reworked in Svelte 5 in a way that makes them simpler, and also, as we’ll see, includes a nice trick to make TypeScript integration more powerful.

Svelte 4 props were another example of hijacking existing JavaScript syntax to do something unrelated. To declare a prop on a component, you’d use the export keyword. It was weird, but it worked.

// ChildComponent.svelte
<script lang="ts">
  export let name: string;
  export let age: number;
  export let currentValue: string;
</script>

<div class="flex flex-col gap-2">
  {name} {age}
  <input bind:value={currentValue} />
</div>

This component created three props. It also bound the currentValue prop into the <input>, so it would change as the user typed. Then to render this component, we’d do something like this:

<script lang="ts">
  import ChildComponent from "./ChildComponent.svelte";

  let currentValue = "";
</script>

Current value in parent: {currentValue}
<ChildComponent name="Bob" age={20} bind:currentValue />

This is Svelte 4, so let currentValue = '' is a piece of state that can change. We pass props for name and age, but we also have bind:currentValue which is a shorthand for bind:currentValue={currentValue}. This creates a two-way binding. As the child changes the value of this prop, it propagates the change upward, to the parent. This is a very cool feature of Svelte, but it’s also easy to misuse, so exercise caution.

If we type in the ChildComponent’s <input>, we’ll see currentValue update in the parent component.

Svelte 5 version

Let’s see what these props look like in Svelte 5.

<script lang="ts">
  type Props = {
    name: string;
    age: number;
    currentValue: string;
  };

  let { age, name, currentValue = $bindable() }: Props = $props();
</script>

<div class="flex flex-col gap-2">
  {name} {age}
  <input bind:value={currentValue} />
</div>

The props are defined via the $props rune, from which we destructure the individual values.

let { age, name, currentValue = $bindable() }: Props = $props();

We can apply typings directly to the destructuring expression. In order to indicate that a prop can be (but doesn’t have to be) bound to the parent, like we saw above, we use the $bindable rune, like this:

 = $bindable()

If you want to provide a default value, assign it to the destructured value. To assign a default value to a bindable prop, pass that value to the $bindable rune.

let { age = 10, name = "foo", currentValue = $bindable("bar") }: Props = $props();

But wait, there’s more!

One of the most exciting changes to Svelte’s prop handling is the improved TypeScript integration. We saw above that you can assign types. But what if we want to do something like this (in React)?

type Props<T> = {
  items: T[];
  onSelect: (item: T) => void;
};
export const AutoComplete = <T,>(props: Props<T>) => {
  return null;
};

We want a React component that receives an array of items, as well as a callback that takes a single item (of the same type). This works in React. How would we do it in Svelte?

At first, it looks easy.

<script lang="ts">
  type Props<T> = {
    items: T[];
    onSelect: (item: T) => void;
  };

  let { items, onSelect }: Props<T> = $props();
  //         Error here _________^
</script>

The first T is a generic parameter, which is defined as part of the Props type. This is fine. The problem is, we need to instantiate that generic type with an actual value for T when we attempt to use it in the destructuring. The T that I used there is undefined. It doesn’t exist. TypeScript has no idea what that T is because it hasn’t been defined.

What changed?

Why did this work so easily with React? The reason is, React components are functions. You can define a generic function, and when you call it TypeScript will infer (if it can) the values of its generic types. It does this by looking at the arguments you pass to the function. With React, rendering a component is conceptually the same as calling it, so TypeScript is able to look at the various props you pass, and infer the generic types as needed.
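
As a quick, standalone TypeScript illustration of that inference mechanism (not tied to React at all):

function firstItem<T>(items: T[]): T {
  return items[0];
}

const n = firstItem([1, 2, 3]);  // T inferred as number
const s = firstItem(["a", "b"]); // T inferred as string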

Svelte components are not functions. They’re a proprietary bit of code thrown into a .svelte file that the Svelte compiler turns into something useful. We do still render Svelte components, and TypeScript could easily look at the props we pass, and infer back the generic types as needed. The root of the problem, though, is that we haven’t (yet) declared any generic types that are associated with the component itself. With React components, these are the same generic types we declare for any function. What do we do for Svelte?

Fortunately, the Svelte maintainers thought of this. You can declare generic types for the component itself with the generics attribute on the <script> tag at the top of your Svelte component:

<script lang="ts" generics="T">
  type Props<T> = {
    items: T[];
    onSelect: (item: T) => void;
  };

  let { items, onSelect }: Props<T> = $props();
</script>

You can even define constraints on your generic arg:

<script lang="ts" generics="T extends { name: string }">
  type Props<T> = {
    items: T[];
    onSelect: (item: T) => void;
  };

  let { items, onSelect }: Props<T> = $props();
</script>

TypeScript will enforce this. If you violate that constraint like this:

<script lang="ts">
  import AutoComplete from "./AutoComplete.svelte";

  let items = [{ name: "Adam" }, { name: "Rich" }];
  let onSelect = (item: { id: number }) => {
    console.log(item.id);
  };
</script>

<div>
  <AutoComplete {items} {onSelect} />
</div>

TypeScript will let you know:

Type '(item: { id: number; }) => void' is not assignable to type '(item: { name: string; }) => void'. Types of parameters 'item' and 'item' are incompatible.

Property 'id' is missing in type '{ name: string; }' but required in type '{ id: number; }'.

Effects

Let’s wrap up with something comparatively easy: side effects. As we briefly saw before, in Svelte 4 you could run code for side effects inside of $: reactive blocks:

$: {
  console.log(someStateValue1, someStateValue2);
}

That code would re-run when either of those values changed.

Svelte 5 introduces the $effect rune. This will run after state has changed and been applied to the DOM. It is for side effects: things like resetting the scroll position after state changes. It is not for synchronizing state. If you’re using the $effect rune to synchronize state, you’re probably doing something wrong (the same goes for the useEffect hook in React).

The code is pretty anti-climactic.

$effect(() => {
  console.log("Current count is ", count);
});

When this code first starts, and anytime count changes, you’ll see this log. To make it more interesting, let’s pretend we have a current timestamp value that auto-updates:

let timestamp = $state(+new Date());
setInterval(() => {
  timestamp = +new Date();
}, 1000);

We want to include that value when we log, but we don’t want our effect to run whenever our timestamp changes; we only want it to run when count changes. Svelte provides an untrack utility for that:

import { untrack } from "svelte";

$effect(() => {
  let timestampValue = untrack(() => timestamp);
  console.log("Current count is ", count, "at", timestampValue);
});

Interop

Massive upgrades where an entire app is updated to use a new framework version’s APIs are seldom feasible, so it should come as no surprise that Svelte 5 continues to support Svelte 4. You can upgrade your app incrementally. Svelte 5 components can render Svelte 4 components, and Svelte 4 components can render Svelte 5 components. The one thing you can’t do is mix and match within a single component. You cannot use reactive assignments $: in the same component that’s using Runes (the Svelte compiler will remind you if you forget).

Since stores are not yet deprecated, they can continue to be used in Svelte 5 components. Remember the createNumberInfo method from before, which returned an object with a store on it? We can use it in Svelte 5. This component is perfectly valid, and works.

<script lang="ts">
  import { createNumberInfo } from '../svelte4/numberInfoStore';

  const numberPacket = createNumberInfo(0);

  const store = numberPacket.numberInfo;
  let junk = $state('Hello');
</script>

<span>Run value: {junk}</span>
<div>Number value: {$store.value}</div>

<button onclick={() => numberPacket.update($store.value + 1)}>Update</button>

But the rule against reactive assignments still holds; we cannot use one to destructure values off of stores when we’re in Svelte 5 components. We have to “dot through” to nested properties with things like {$store.value} in the binding (which always works) rather than:

$: ({ value } = $store);

… which generates the error of:

$: is not allowed in runes mode, use $derived or $effect instead

The error is even clear enough to give you another alternative to inlining those nested properties, which is to create a $derived state:

let value = $derived($store.value);
// or let { value } = $derived($store);

Personally I’m not a huge fan of mixing the new $derived primitive with the old Svelte 4 syntax of $store, but that’s a matter of taste.

Parting thoughts

Svelte 5 has shipped some incredibly exciting changes. We covered the new, more reliable reactivity primitives, the improved prop management with tighter TypeScript integration, and the new side effect primitive. But we haven’t come close to covering everything. Not only are there more variations on the $state rune, but Svelte 5 also updated its event handling mechanism, and even shipped an exciting new way to re-use “snippets” of HTML.

Svelte 5 is worth a serious look for your next project.

Introducing Drizzle

This is a post about an exciting new ORM tool (that’s “object relational mapper”) that is different than any ORM I’ve used before—and I’ve used quite a few! Spoiler: it’s Drizzle.

Wait, what’s an ORM?

Even if you’re using a non-relational database now (think MongoDB or Redis), sooner or later you’ll likely need a relational DB.

Knowing SQL is an essential skill for any software engineer, but writing SQL directly can be tricky! The tooling is usually primitive, with only minimal auto-complete to guide you, and you invariably go through a process of running your query, correcting errors, and repeating until you get it right.

ORMs try to help you with this process of crafting SQL. Typically, you tell the ORM about the shape of your DB and it exposes APIs to do typical things. If you have a books table in your DB, an ORM will give you an API for it where you can do stuff like:

const longBooks = books.find({ pages: { gt: 500 } });

Behind the scenes, the SQL created might be something like:

SELECT * FROM books WHERE pages > 500;

That may not look like a massive simplification, but as the parameters get more complex or strung together, the SQL can get a bit mindbending. Not to mention the ORM keeps that code in a language you’re likely already using, like JavaScript.

This ease of use may seem nice, but it can cause other problems. For example, you might struggle figuring out how to do non-trivial queries. And there are performance foot-guns, such as the infamous Select N + 1 problem, which you might cause without realizing it due to the abstracted away syntax.

Why Drizzle is Different

Drizzle takes a novel approach. Drizzle does provide you a traditional ORM querying API, like we saw above. But in addition to that, it also provides an API that is essentially a layer of typing on top of SQL itself. So rather than what we saw before, we might query our books table like this:

const longBooks = await db
  .select()
  .from(books)
  .where(gt(books.pages, 500));

It’s more lines, but it’s closer to actual SQL, which provides us some nice benefits: it’s easier to learn, more flexible, and avoids traditional ORM footguns.

Let’s dive in and look closer. This post will give a brief overview of setting up Drizzle and querying, and then do a deeper dive showing off some of its powerful abilities with this typed SQL querying API. The docs are here if you’d like to look closer at anything.

Using Drizzle in general, and some of the advanced things we’ll cover in this post requires a decent knowledge of SQL. If you’ve never, ever used SQL, you might struggle with a few of the things we discuss later on. That’s expected. Skim and jump over sections as needed. If nothing else, hopefully this post will motivate you to look at SQL.

Setting up the Schema

Drizzle can’t do much of anything if it doesn’t know about your database. There are lots of utilities for showing Drizzle the structure (or schema) of your tables. We’ll take a very brief look, but a more complete example can be found here.

Drizzle supports Postgres, MySQL, and SQLite. The ideas are the same for each, but we’ll be using MySQL.

Let’s start to set up a table.

import { int, json, mysqlTable, varchar } from "drizzle-orm/mysql-core";

export const books = mysqlTable("books", {
  id: int("id").primaryKey().autoincrement(),
  userId: varchar("userId", { length: 50 }).notNull(),
  isbn: varchar("isbn", { length: 25 }),
  pages: int("pages"),
});

We tell Drizzle about our columns (we won’t show all of them here), and their data types.

Now we can run queries:

const result = await db
  .select()
  .from(books)
  .orderBy(desc(books.id))
  .limit(1);

This query returns an array of items which match the schema we provided Drizzle for this table.
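
If you want a name for that row type, Drizzle can infer it from the schema; something along these lines should work (the $inferSelect helper is part of Drizzle’s API but isn’t shown in the original post, so double-check it against the docs for your version):

// Inferred from the mysqlTable definition above
type Book = typeof books.$inferSelect;
// roughly: { id: number; userId: string; isbn: string | null; pages: number | null }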

[Screenshot: first query result]

Alternatively, as expected, we can also narrow our select list.

const result = await db
  .select({ id: books.id, isbn: books.isbn })
  .from(books)
  .orderBy(desc(books.id))
  .limit(1);

Note that the types of the columns match whatever we define in the schema. We won’t go over every possible column type (check the docs), but let’s briefly look at the JSON type:

export const books = mysqlTable("books", {
  id: int("id").primaryKey().autoincrement(),
  userId: varchar("userId", { length: 50 }).notNull(),
  isbn: varchar("isbn", { length: 25 }),
  pages: int("pages"),
  authors: json("authors"),
});

This adds an authors field to each book. But the type might not be what you want. Right now it’s unknown. This makes sense: JSON can have just about any structure. Fortunately, if you know your json column will have a predictable shape, you can specify it, like this:  

export const books = mysqlTable("books", {
  id: int("id").primaryKey().autoincrement(),
  userId: varchar("userId", { length: 50 }).notNull(),
  isbn: varchar("isbn", { length: 25 }),
  pages: int("pages"),
  authors: json("authors").$type<string[]>(),
});

And now, when we check, the authors property is of type string[] | null.

[Screenshot: the authors property typed as string[] | null]

If you were to mark the authors column as notNull() it would be typed as string[]. As you might expect, you can pass any type you’d like into the $type helper.
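
For instance, if the column held richer objects instead of plain strings, the same helper could type it. The Author shape and the table name below are made up purely for illustration:

import { int, json, mysqlTable } from "drizzle-orm/mysql-core";

type Author = { name: string; primary?: boolean };

export const booksWithAuthorObjects = mysqlTable("books", {
  id: int("id").primaryKey().autoincrement(),
  // authors will now be typed as Author[] | null on query results
  authors: json("authors").$type<Author[]>(),
});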

Query Whirlwind Tour

Let’s run a non-trivial, but still basic query to see what Drizzle looks like in practice. Let’s say we’re looking to find some nice beach reading for the summer. We want to find books that belong to you (userId == “123”), and are either less than 150 pages, or were written by Stephen Jay Gould. We want the first ten, and we want them sorted from most recently added to least recently added (the id key is auto-numbered, so we can sort on that for the same effect).

In SQL we’d do something like this:

SELECT *
FROM books
WHERE userId = '123' AND (pages < 150 OR authors LIKE '%Stephen Jay Gould%')
ORDER BY id desc
LIMIT 10

With Drizzle we’d write this:

const result = await db
  .select()
  .from(books)
  .where(
    and(
      eq(books.userId, userId),
      or(lt(books.pages, 150), like(books.authors, "%Stephen Jay Gould%"))
    )
  )
  .orderBy(desc(books.id))
  .limit(10);

Which works!

[
  {
    "id": 1088,
    "userId": "123",
    "authors": ["Siry, Steven E"],
    "title": "Greene: Revolutionary General (Military Profiles)",
    "isbn": "9781574889130",
    "pages": 144
  },
  {
    "id": 828,
    "userId": "123",
    "authors": ["Morton J. Horwitz"],
    "title": "The Warren Court and the Pursuit of Justice",
    "isbn": "0809016257",
    "pages": 144
  },
  {
    "id": 506,
    "userId": "123",
    "authors": ["Stephen Jay Gould"],
    "title": "Bully for Brontosaurus: Reflections in Natural History",
    "isbn": "039330857X",
    "pages": 544
  },
  {
    "id": 412,
    "userId": "123",
    "authors": ["Stephen Jay Gould"],
    "title": "The Flamingo's Smile: Reflections in Natural History",
    "isbn": "0393303756",
    "pages": 480
  },
  {
    "id": 356,
    "userId": "123",
    "authors": ["Stephen Jay Gould"],
    "title": "Hen's Teeth and Horse's Toes: Further Reflections in Natural History",
    "isbn": "0393311031",
    "pages": 416
  },
  {
    "id": 319,
    "userId": "123",
    "authors": ["Robert J. Schneller"],
    "title": "Cushing: Civil War SEAL (Military Profiles)",
    "isbn": "1574886967",
    "pages": 128
  }
]

The Drizzle version was actually a little bit longer. But we’re not optimizing for fewest possible lines of code. The Drizzle version is typed, with autocomplete to guide you toward a valid query, and TypeScript to warn you when you miss. The query is also a lot more composable. What do I mean by that?

Composability: Putting Queries Together

Let’s write something slightly more advanced and slightly more realistic. Let’s code up a function that takes any number of search filters, and puts together a query. Here’s what the filters look like:

type SearchPacket = Partial<{
  title: string;
  author: string;
  maxPages: number;
  subjects?: number[];
}>;

Note the Partial type. We’re taking in any number of these filters—possibly none of them. Whichever filters are passed, we want them to be additive; we want them combined with and. We’ve seen and already, and it can take the result of calls to eq, lt, and lots of others. We’ll need to create an array of all of these filters, and Drizzle gives us a parent type that can hold any of them: SQLWrapper.

Let’s get started.

async function searchBooks(args: SearchPacket) {
  const searchConditions: SQLWrapper[] = [];
}

We’ve got our array of filters. Now let’s start filling it up.

async function searchBooks(args: SearchPacket) {
  const searchConditions: SQLWrapper[] = [];
  if (args.title) {
    searchConditions.push(like(books.title, `%${args.title}%`));
  }
}

Nothing new, yet. This is the same filter we saw before with authors.

Speaking of authors, let’s add that query next. But let’s make the author check a little more realistic. It’s not a varchar column; it holds JSON values, which themselves are arrays of strings. MySQL gives us a way to search JSON: the ->> operator. This takes a JSON column, and evaluates a path on it. So if you had objects in there, you’d pass string paths to get properties out. We just have an array of strings, so our path is $, which is the actual values in the array. And string comparisons when we’re filtering on JSON columns like this are no longer case insensitive, so we’ll want to use the LOWER function in MySQL.

Typically, with traditional ORMs you’d scramble to the docs to look for an equivalent to the ->> operator, as well as the LOWER function. Drizzle does something better, and gives us a nice escape hatch to just write SQL directly in situations like this. Let’s implement our authors filter.

async function searchBooks(args: SearchPacket) {
  const searchConditions: SQLWrapper[] = [];
  if (args.title) {
    searchConditions.push(like(books.title, `%${args.title}%`));
  }
  if (args.author) {
    searchConditions.push(
      sql`LOWER(${books.authors}->>"$") LIKE ${`%${args.author.toLowerCase()}%`}`
    );
  }
}

Note the sql tagged template literal. It lets us put arbitrary SQL in for one-off operations that may not be implemented in the ORM. Before moving on, let’s take a quick peek at the SQL generated by this:

{
  "sql": "select `id`, `userId`, `authors`, `title`, `isbn`, `pages` from `books` where (`books`.`userId` = ? and LOWER(`books`.`authors`->>\"$\") LIKE ?) order by `books`.`id` desc limit ?",
  "params": ["123", "%gould%", 10]
}

Let’s zoom in on the authors piece. What we entered as…

sql`LOWER(${books.authors}->>"$") LIKE ${`%${args.author.toLowerCase()}%`}`

… gets transformed as:

LOWER(`books`.`authors`->>"$") LIKE ?

Our search term was parameterized; however, Drizzle was smart enough to not parameterize our column. I’m continuously impressed by small details like this. The maxPages piece is the same as before:

if (args.maxPages) {
  searchConditions.push(lte(books.pages, args.maxPages));
}

Nothing new or interesting. Now let’s look at the subjects filter. We can pass in an array of subject ids, and we want to filter books that have that subject. The relationship between books and subjects is stored in a separate table, booksSubjects. This table simply has rows with an id, a book id, and a subject id (and also the userId for that book, to make other queries easier).

So if book 12 has subject 34, there’ll be a row with bookId of 12, and subjectId of 34.
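
The post doesn’t show the schema for that table, but based on that description and the column names used in the queries below (book, subject, userId), it presumably looks roughly like this (an assumed sketch, not the actual schema):

import { int, mysqlTable, varchar } from "drizzle-orm/mysql-core";

// Assumed shape of the join table described above
export const booksSubjects = mysqlTable("books_subjects", {
  id: int("id").primaryKey().autoincrement(),
  book: int("book").notNull(),
  subject: int("subject").notNull(),
  userId: varchar("userId", { length: 50 }).notNull(),
});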

In SQL, when we want to see if a given row exists in some table, we use the exists keyword, and Drizzle has an exists function for this very purpose. Let’s move on with our function:

async function searchBooks(args: SearchPacket) {
  const searchConditions: SQLWrapper[] = [];
  if (args.title) {
    searchConditions.push(like(books.title, `%${args.title}%`));
  }
  if (args.author) {
    searchConditions.push(
      sql`LOWER(${books.authors}->>"$") LIKE ${`%${args.author.toLowerCase()}%`}`,
    );
  }
  if (args.maxPages) {
    searchConditions.push(lte(books.pages, args.maxPages));
  }
  if (args.subjects?.length) {
    searchConditions.push(
      exists(
        db
          .select({ _: sql`1` })
          .from(booksSubjects)
          .where(
            and(
              eq(books.id, booksSubjects.book),
              inArray(booksSubjects.subject, args.subjects),
            ),
          ),
      ),
    );
  }
}

We can pass an exists() call right into our list of filters, just like with real SQL. This bit:

_: sql`1`

… is curious, but that’s just us saying SELECT 1 which is a common way of putting something into a SELECT list, even though we’re not pulling back any data; we’re just checking for existence. Lastly, the inArray Drizzle helper is how we generate an IN query. Here’s what the generated SQL looks like for this subjects query:

select `id`, `userId`, `authors`, `title`, `isbn`, `pages`
from `books`
where (`books`.`userId` = ? and exists (select 1
                                        from `books_subjects`
                                        where (`books`.`id` = `books_subjects`.`book` and
                                               `books_subjects`.`subject` in (?, ?))))
order by `books`.`id` desc
limit ?

That was our last filter. Now we can pipe our filters in to execute the query we put together.

async function searchBooks(args: SearchPacket) {
  const searchConditions: SQLWrapper[] = [];
  if (args.title) {
    searchConditions.push(like(books.title, `%${args.title}%`));
  }
  if (args.author) {
    searchConditions.push(
      sql`LOWER(${books.authors}->>"$") LIKE ${`%${args.author.toLowerCase()}%`}`
    );
  }
  if (args.maxPages) {
    searchConditions.push(lte(books.pages, args.maxPages));
  }
  if (args.subjects?.length) {
    searchConditions.push(
      exists(
        db
          .select({ _: sql`1` })
          .from(booksSubjects)
          .where(
            and(
              eq(books.id, booksSubjects.book),
              inArray(booksSubjects.subject, args.subjects)
            )
          )
      )
    );
  }

  const result = await db
    .select()
    .from(books)
    .where(and(eq(books.userId, userId), ...searchConditions))
    .orderBy(desc(books.id))
    .limit(10);
}

The ability to treat SQL queries as typed function calls that can be combined arbitrarily is what makes Drizzle shine.

Digging deeper

We could end the post here, but let’s go further and see how Drizzle handles something fairly complex. You might never need (or want) to write queries like this. My purpose in including this section is to show that you can, if you ever need to.

With that out of the way, let’s write a query to get aggregate info about our books. We want our most and least popular subject(s), and how many books we have with those subjects. We also want to know any unused subjects, as well as that same info about tags (which we haven’t talked about). And also the total number of books we have overall. This data might be displayed in a screen like this.

[Screenshot: aggregate info screen]

To keep this section manageable we’ll just get the book counts and the most and least popular subjects. The other pieces are variations on that theme. You can see the finished product here.

Let’s look at some of the SQL for this and how to write it with Drizzle.

Number of books per subject

In SQL we can group things together with GROUP BY.

SELECT
    subject,
    count(*)
FROM books_subjects
GROUP BY subject

Now our SELECT list, rather than pulling items from a table, is pulling from a (conceptual) lookup table. We (conceptually) have a bunch of buckets stored by subject id. So we can select those subject ids, as well as aggregate info from the buckets themselves, which we do with count(*). This selects each subject, and the number of books under that subject.

And it works:

[Screenshot: books counted per subject]

But we want the most and least popular subjects. SQL also has what are called window functions. We can, on the fly, sort these buckets in some order, and then ask questions about the data, sorted in that way. We basically want the subject(s) with the highest, or lowest, number of books, including ties. It turns out RANK is exactly what we want. Let’s see how this works:

SELECT
    subject,
    count(*) as count,
    RANK() OVER (ORDER BY count(*) DESC) MaxSubject,
    RANK() OVER (ORDER BY count(*) ASC) MinSubject
FROM books_subjects
WHERE userId = '123'
GROUP BY subject

We ask for the rank of each row, when the whole result set is sorted in whatever way we describe.

[Screenshot: RANK query results]

Subjects 79, 137 and 150 all have a MinSubject rank of 1, which means they are the least used subjects. That makes sense, since there’s only one book with each of those subjects.

It’s a little mind bendy at first, so don’t worry if this looks a little weird. The point is to show how well Drizzle can simplify SQL for us, not to be a deep dive into SQL, so let’s move on.

We want the subjects with a MaxSubject of 1, or a MinSubject of 1. We can’t use WHERE for this, at least not directly. The solution in SQL is to turn this query into a virtual table, and query that. It looks like this:

SELECT
    t.subject id,
    CASE WHEN t.MinSubject = 1 THEN 'MinSubject' ELSE 'MaxSubject' END as label,
    t.count
FROM (
    SELECT
        subject,
        count(*) as count,
        RANK() OVER (ORDER BY count(*) DESC) MaxSubject,
        RANK() OVER (ORDER BY count(*) ASC) MinSubject
    FROM books_subjects
    WHERE userId = '123'
    GROUP BY subject
) t
WHERE t.MaxSubject = 1 OR t.MinSubject = 1

And it works.

[Screenshot: filtered rank query results]

Moving this along

We won’t show tags, since it’s basically identical except we hit a books_tags table, instead of books_subjects. We also won’t show unused subjects (or tags), which is also very similar, except we use a NOT EXISTS query.

The query to get the total number of books looks like this:

SELECT count(*) as count
FROM books
WHERE userId = '123'

Let’s add some columns to get it in the same structure as our subjects queries:

SELECT
    0 id,
    'Books Count' as label,
    count(*) as count
FROM books
WHERE userId = '123'

Now, since these queries return the same structure, let’s combine them into one big query. We use UNION for this.

SELECT *
FROM (
    SELECT
        t.subject id,
        CASE WHEN t.MinSubject = 1 THEN 'MinSubject' ELSE 'MaxSubject' END as label,
        t.count
    FROM (
        SELECT
            subject,
            count(*) as count,
            RANK() OVER (ORDER BY count(*) DESC) MaxSubject,
            RANK() OVER (ORDER BY count(*) ASC) MinSubject
        FROM books_subjects
        GROUP BY subject
    ) t
    WHERE t.MaxSubject = 1 OR t.MinSubject = 1
) subjects
UNION
    SELECT
        0 id,
        'Books Count' as label,
        count(*) as count
    FROM books
    WHERE userId = '123';

And it works! Phew!

[Screenshot: UNION query results]

But this is gross to write manually, and even grosser to maintain. There are a lot of pieces here, and there’s no (good) way to break this apart and manage them separately. SQL is ultimately text, and you can, of course, generate these various pieces of text with different functions in your code, and then concatenate them together.

But that’s fraught with difficulty too. It’s easy to get small details wrong when you’re pasting strings of code together. And believe it or not, this query is much simpler than much of what I’ve seen.

The Drizzle Way

Let’s see what this looks like in Drizzle. Remember that initial query to get each subject, with its count, and rank? Here it is in Drizzle:

const subjectCountRank = () =>
  db
    .select({
      subject: booksSubjects.subject,
      count: sql<number>`COUNT(*)`.as("count"),
      rankMin: sql<number>`RANK() OVER (ORDER BY COUNT(*) ASC)`.as("rankMin"),
      rankMax: sql<number>`RANK() OVER (ORDER BY COUNT(*) DESC)`.as("rankMax"),
    })
    .from(booksSubjects)
    .where(eq(booksSubjects.userId, userId))
    .groupBy(booksSubjects.subject)
    .as("t");

Drizzle supports grouping, and it even has an as function to alias a query, and enable it to be queried from. Let’s do that next.

const subjectsQuery = () => {
  const subQuery = subjectCountRank();

  return db
    .select({
      label:
        sql<string>`CASE WHEN t.rankMin = 1 THEN 'MIN Subjects' ELSE 'MAX Subjects' END`.as(
          "label"
        ),
      count: subQuery.count,
      id: subQuery.subject,
    })
    .from(subQuery)
    .where(or(eq(subQuery.rankMin, 1), eq(subQuery.rankMax, 1)));
};

We stuck our query to get the ranks in a function, and then we just called that function, and queried from its result. SQL is feeling a lot more like normal coding!

The query for the total book count is simple enough.

db
  .select({ label: sql<string>`'All books'`, count: sql<number>`COUNT(*)`, id: sql<number>`0` })
  .from(books)
  .where(eq(books.userId, userId)),

Hopefully we won’t be too surprised to learn that Drizzle has a union function, to union queries together. Let’s see it all together:

const dataQuery = union(
  db
    .select({
      label: sql<string>`'All books'`,
      count: sql<number>`COUNT(*)`,
      id: sql<number>`0`,
    })
    .from(books)
    .where(eq(books.userId, userId)),
  subjectsQuery()
);

Which generates this SQL for us:

(select 'All books', COUNT(*), 0 from `books` where `books`.`userId` = ?)
union
(select CASE WHEN t.rankMin = 1 THEN 'MIN Subjects' ELSE 'MAX Subjects' END as `label`, `count`, `subject`
 from (select `subject`,
              COUNT(*)                             as `count`,
              RANK() OVER (ORDER BY COUNT(*) ASC)  as `rankMin`,
              RANK() OVER (ORDER BY COUNT(*) DESC) as `rankMax`
       from `books_subjects`
       where `books_subjects`.`userId` = ?
       group by `books_subjects`.`subject`) `t`
 where (`rankMin` = ? or `rankMax` = ?))

Basically the same thing we did before, but with a few more parens, plus some userId filtering I left off for clarity.

I left off the tags queries, and the unused subjects/tags queries, but if you’re curious what they look like, the code is all here and the final union looks like this:

const dataQuery = union(
  db
    .select({
      label: sql<string>`'All books'`,
      count: sql<number>`COUNT(*)`,
      id: sql<number>`0`,
    })
    .from(books)
    .where(eq(books.userId, userId)),
  subjectsQuery(),
  unusedSubjectsQuery(),
  tagsQuery(),
  unusedTagsQuery()
);

Just more function calls thrown into the union.

Flexibility

Some of you might wince seeing that many large queries all union‘d together. Those queries are run one after the other on the MySQL box, but for this project it’s a small amount of data, and there aren’t multiple round trips over the network to do it.

But let’s say you decide you’re better off breaking that union apart, and sending N queries, with each piece, and putting it all together in application code. These queries are already separate function calls. It would be fairly easy to remove those calls from the union, and instead invoke them in isolation (and then modify your application code).
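
A rough sketch of what that might look like, reusing the query helpers from above (Drizzle’s select builders can be awaited directly, just like the queries we’ve been running; the destructured names here are just for illustration):

const [allBooksCount, subjectBreakdown] = await Promise.all([
  db
    .select({
      label: sql<string>`'All books'`,
      count: sql<number>`COUNT(*)`,
      id: sql<number>`0`,
    })
    .from(books)
    .where(eq(books.userId, userId)),
  subjectsQuery(),
]);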

This kind of flexibility is what I love the most about Drizzle. Refactoring large, complex stored procedures has always been a pain with SQL. When you code it through Drizzle, it becomes much more like refactoring a typed programming language like TypeScript or C#.

Debugging Queries

Before we wrap up, let’s take a look at how easily Drizzle lets you debug your queries. Let’s say the query from earlier didn’t return what we expected, and we want to see the actual SQL being run. We can do that by removing the await from the query, and then calling toSQL on the result.

import { and, desc, eq, like, lt, or } from "drizzle-orm";

const result = db
  .select()
  .from(books)
  .where(
    and(
      eq(books.userId, userId),
      or(lt(books.pages, 150), like(books.authors, "%Stephen Jay Gould%"))
    )
  )
  .orderBy(desc(books.id))
  .limit(10);

console.log(result.toSQL());

This displays the following:

{
  "sql": "select `id`, `userId`, `authors`, `title`, `isbn`, `pages` from `books` where (`books`.`userId` = ? and (`books`.`pages` < ? or `books`.`authors` like ?)) order by `books`.`id` desc limit ?",
  "params": ["123", 150, "%Stephen Jay Gould%", 10]
}

result.toSQL() returned an object, with a sql field with our query, and a params field with the parameters. As any ORM would, Drizzle parameterized our query, so fields with invalid characters wouldn’t break anything. You can now run this query directly against your database to see what went wrong.

Wrapping Up

I hope you’ve enjoyed this introduction to Drizzle. If you’re not afraid of a little SQL, it can make your life a lot easier.

Testing Types in TypeScript

Say that 10 times fast.

As your TypeScript usage gets more advanced, it can be extremely helpful to have utilities around that test and verify your types. Like unit testing, but without needing to set up Jest, deal with mocking, etc. In this post, we’ll introduce this idea. Then we’ll dive deeply into one particular testing utility that’s surprisingly difficult to create: a type that checks whether two types are the same.

This post will cover some advanced corners of TypeScript you’re unlikely to need for regular application code. If you’re not a huge fan of TS, please understand that you probably won’t need the things in this post for your everyday work, which is fine. But if you’re writing or maintaining TypeScript libraries, or even library-like code in your team’s app, the things we discuss here might come in handy.

Type helpers

Consider this Expect helper:

type Expect<T extends true> = T;

This type demands you pass true into it. This seems silly, but stay with me.

Now imagine, for some reason, you have a helper for figuring out whether a type is some kind of array:

type IsArray<T> = T extends Array<any> ? true : false;

You’d like to verify that IsArray works properly. You could type:

type X = IsArray<number[]>;

Then mouse over the X and verify that it’s true, which it is. But we don’t settle for ad hoc testing like that with normal code, so why would we with our advanced types?

Why don’t we write this instead:

type X = Expect<IsArray<number[]>>;

If we had messed up our IsArray type, the line above would error out, which we can see by passing the wrong thing into it:

type Y = Expect<IsArray<number>>;
// error: Type 'false' does not satisfy the constraint 'true'.

Better yet, let’s create another helper:

type Not<T extends false> = true;

and now we can actually test the negative of our IsArray helper

type Y = Expect<Not<IsArray<number>>>;

Except these fake type names like X and Y will get annoying very quickly, so let’s do this instead

// ts-ignore just to ignore the unused warning - everything inside of Tests will still type check
// @ts-ignore
type Tests = [
  Expect<IsArray<number[]>>,
  Expect<Not<IsArray<number>>>
];

So far so good. We can put these tests for our types right in our application files if we want, or move them to a separate file; the types are all erased when we ship either way, so don’t worry about bundle size.

Getting serious

Our IsArray type was trivial, as were our tests. In real life, we’ll be writing types that do more interesting things, usually taking in one or more types, and creating something new. And to test those sorts of things, we’ll need to be able to verify that two types are identical.

For example, say you want to write a type that takes in a generic, and if that generic is a function, returns the parameters of that function, else returns never. Pretend the Parameters type is not built into TypeScript, and imagine you write this:

type ParametersOf<T> = T extends (...args: infer U) => any ? U : never;

Which we’d test like this:

// ts-ignore just to ignore the unused warning - everything inside of Tests will still type check
// @ts-ignore
type Tests = [
  Expect<TypesMatch<ParametersOf<(a: string) => void>, [string]>>,
  Expect<TypesMatch<ParametersOf<string>, never>>
];

Great. But how do you write TypesMatch?

That’s the subject of the entire rest of this post. Buckle up!

Type Equality

Checking type equality is surprisingly hard in TypeScript, and the obvious solution will fail for baffling reasons unless you understand conditional types. Let’s tackle that before moving on.

We’ll start with the most obvious, potential solution:

type TypesMatch<T, U> = T extends U ? true : false;

You can think of T extends U in the same way as with object-oriented inheritance: is T the same as, or a sub-type of, U? And instead of (just) object-oriented hierarchies, remember that you can have literal types in TypeScript. type Foo = "foo" is a perfectly valid type in TypeScript. It’s the type that represents all strings that match "foo". Similarly, type Foo = "foo" | "bar"; is the type representing all strings that match either "foo" or "bar". And literal types, and unions of literal types like that, can both be thought of as sub-types of string, for these purposes.

Another (more common way) to think about this is that T extends U is true if T can be assigned to U, which makes sense; if T is the same, or a sub-type of U, then a variable of type T can be assigned to a variable of type U.

The obvious test works:

type Tests = [Expect<TypesMatch<string, string>>];

So far, so good. And:

type Tests = [Expect<Not<TypesMatch<string, "foo">>>];

This also works, since a variable of type string cannot be assigned to a variable of type "foo".

let foo: "foo" = "foo";
let str: string = "blah blah blah";

foo = str; // Error
// Type 'string' is not assignable to type '"foo"'.

But we hit problems with:

type Tests = [Expect<Not<TypesMatch<"foo", string>>>];

This fails. The string literal type "foo" is assignable to variables of type string.

Just test them both ways

I know what you’re thinking: just test it from both directions!

type TypesMatch<T, U> = T extends U
  ? U extends T
    ? true
    : false
  : false;

This solves both of our problems from above. Now both of these tests pass.

type Tests = [
  Expect<Not<TypesMatch<string, "foo">>>,
  Expect<Not<TypesMatch<"foo", string>>>,
];

Let’s try union types:

type Tests = [
  Expect<TypesMatch<string | number, string | number>>,
  Expect<Not<TypesMatch<string | number, string | number | boolean>>>
];

Both of these fail with:

Type ‘boolean’ does not satisfy the constraint ‘true’

Identical union types fail to match as identical, and different union types fail to match as different. What in the world is happening?

Conditional types over unions

So why did this not work with union types?

type TypesMatch<T, U> = T extends U
  ? U extends T
    ? true
    : false
  : false;

Let’s back up and try to simplify. Let’s imagine some simple (useless) types. Imagine a square and circle:

type Square = {
  length: number;
};
type Circle = {
  radius: number;
};

Now imagine a type that takes a generic in, and returns a description. If we pass in a Square, it returns the string literal type "4 Sides". If we pass in a circle, it returns the string literal "Round". And if we pass in anything else, it returns the string literal type "Dunno". This is very silly but just go with it.

type Description<T> = T extends Square
  ? "4 Sides"
  : T extends Circle
  ? "Round"
  : "Dunno";

Now imagine a function to use this type:

function getDescription<T>(obj: T): Description<T> {
  return null as any;
}
const s: Square = { length: 1 };
const c: Circle = { radius: 1 };

const sDescription = getDescription(s);
const cDescription = getDescription(c);

sDescription is of type "4 Sides" and cDescription is of type "Round". Nothing surprising. Now let’s consider a union type.

const either: Circle | Square = {} as any;
const eitherDescription = getDescription(either);

The type Circle | Square does not extend Square (a variable of type Circle | Square cannot be assigned to a variable of type Square) nor does it extend Circle. So we might naively expect eitherDescription to be "Dunno". But intuitively this feels wrong. either is a Circle or a Square, so the description should be either "4 Sides" or "Round".

And that’s exactly what it is:

[Screenshot: eitherDescription typed as "4 Sides" | "Round"]

Distributing union types

When we have a generic type argument that’s also a type union, pushed across an extends check in a conditional type, the union itself is split up with each member of the union substituted into that check. TypeScript then takes every result, and unions them together. That union is the result of that extends operation.

Any never‘s are removed, as are any duplicates.
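
Here’s a small, self-contained example of distribution before we walk through the shapes example step by step:

type ToArray<T> = T extends any ? T[] : never;

// The union is split, each member run through the conditional, and the results union'd:
// ToArray<string> | ToArray<number>
type Arrays = ToArray<string | number>; // string[] | number[]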

So for the above, we start with our Description<T> type, and substitute the union type that we passed in for T: Description<Square | Circle>.

Once our conditional type hits the extends keyword, since we passed in a union type, TypeScript will distribute over the union; it’ll run that ternary for each type in our union, and then union all of those results together. Square is first: Square extends Square, so "4 Sides" is the result. Then it repeats with Circle: Circle extends Circle, so "Round" is the result of the second iteration. The two are union’d together, resulting in the "4 Sides" | "Round" that we just saw.
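
In other words, the evaluation conceptually unfolds like this:

type EitherDescription = Description<Square | Circle>;
// = Description<Square> | Description<Circle>
// = "4 Sides" | "Round"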

Playing on Hard Mode

Let’s take a fresh look at this:

type TypesMatch<T, U> = T extends U
  ? U extends T
    ? true
    : false
  : false;

Here’s what happens with:

TypesMatch<string | number, string | number>;

We start by substituting string | number in for T and U. Evaluation begins and immediately gets to our first extends. T and U are both unions, but only the type before the extends is distributed. With string | number substituted for U, we’re ready to process that first extends: we’ll need to break up the T union, and run the check for every type in that union.

string is first. string does extend string | number, so we hit the true branch, and are immediately greeted by a second extends. Think of it like a nested loop. We’ll process this inner extends in the same way, substituting each type in the U union, starting with string: string extends string is of course true. Our first result is true.

Now let’s continue processing our inner loop. U will next be number: number extends string is of course false, which is the second result of our type.

So far we have true | false. Now our outer loop continues. T moves on to become the second member of its union, number. number extends string | number is true, so we again hit that first branch. The inner loop starts all over again, with U first becoming string: string extends number is false, so our overall result is true | false | false. U then becomes number, which yields true.

The whole thing produced true | false | false | true. TypeScript removes the duplicates, leaving us with true | false.

Do you know what a simpler name for the type true | false is? It’s boolean. This type produced boolean, which is why we got the error:

Type ‘boolean’ does not satisfy the constraint ‘true’

We were expecting a literal type of true but got a union of true and false, which reduced to boolean.
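
Spelled out, following the walkthrough above, the whole evaluation looks something like this:

type Matched = TypesMatch<string | number, string | number>;
// T = string: U = string -> true,  U = number -> false   (checking U extends T)
// T = number: U = string -> false, U = number -> true
// => true | false | false | true
// => true | false, which is just boolean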

It’s the same idea with:

type Result = TypesMatch<string | number, string | number | boolean>;

We won’t go through all the permutations. Suffice it to say we’ll get some mix of true and false, which will reduce back to boolean again.

So how to fix it?

We need to stop the union types from distributing. The distributing happens only with a raw generic type in a conditional type expression. We can turn it off by turning that type into another type, with the same assignability rules. A tuple type does that perfectly:

type TypesMatch<T, U> = [T] extends [U]
  ? [U] extends [T]
    ? true
    : false
  : false;

Instead of asking if T extends U, I ask if [T] extends [U]. [T] is a tuple type with one member, T. Think of it with non-generic types. [string] is essentially an array with a single element (and no more!) that’s a string.

All our normal rules apply. ["foo"] extends [string] is true, while [string] extends ["foo"] is false. You can assign a tuple with a single "foo" string literal to a tuple with a single string, for the same reason that you can assign a "foo" string literal to a variable of type string.
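
Wrapping in a one-element tuple keeps the assignability relationship but switches off distribution, because the generic is no longer "naked" in the extends check:

type Naked<T> = T extends string ? "yes" : "no";
type Wrapped<T> = [T] extends [string] ? "yes" : "no";

type A = Naked<string | number>;   // "yes" | "no" (distributed per member)
type B = Wrapped<string | number>; // "no" (evaluated once, no distribution)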

One last fix

We’re pretty much done, don’t worry.

type TypesMatch<T, U> = [T] extends [U]
  ? [U] extends [T]
    ? true
    : false
  : false;

type Tests = [
  Expect<Not<TypesMatch<string, "foo">>>,
  Expect<TypesMatch<string | number, string | number>>,
  Expect<Not<TypesMatch<string | number, string | number | object>>>
];

This mostly works, but there’s one rub: optional properties. This test currently fails:

Expect<Not<TypesMatch<{ a: number }, { a: number; b?: string }>>>;

Unfortunately both of those types are assignable to each other, which makes sense. The b in the second type is optional. { a: number; b?: string } can in fact be assigned to a variable expecting { a: number }, since the structure of { a: number } is satisfied. TypeScript is happy to ignore the extra properties. We’re really close though. This only happens if either type has optional properties which are absent from the other type.

By contrast, this test already passes, since the assignability check fails in both directions:

Expect<Not<TypesMatch<{ a: number; b?: number }, { a: number; b?: string }>>>;

Those types are not at all compatible. So why don’t we just keep what we already have and verify that all of the property keys are the same. We’ll rename a smidge and wind up with this:

type ShapesMatch<T, U> = [T] extends [U]
  ? [U] extends [T]
    ? true
    : false
  : false;

type TypesMatch<T, U> = ShapesMatch<T, U> extends true
  ? ShapesMatch<keyof T, keyof U> extends true
    ? true
    : false
  : false;

What was TypesMatch is now ShapesMatch, and our real TypesMatch calls that, then calls it again on the types’ keys. Don’t be scared of keyof T — that’s safe. This will work on primitive types, function types, etc. So long as the types are the same, the result will be the same. If the types are object types, the extra check makes sure their property keys line up.
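With that in place, the optional-property cases behave the way we wanted. Here are a couple of extra tests, using the same Expect and Not helpers from earlier, that should now pass:

type MoreTests = [
  // { a: number } and { a: number; b?: string } are mutually assignable,
  // but their keys differ, so they no longer count as matching.
  Expect<Not<TypesMatch<{ a: number }, { a: number; b?: string }>>>,
  // Identical shapes still match.
  Expect<TypesMatch<{ a: number; b?: string }, { a: number; b?: string }>>
];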

Wrapping up

Conditional types can be incredibly helpful when you’re building advanced types for testing or otherwise. But they also come with some behaviors that can be surprising to the uninitiated. Hopefully this post made some of that clearer.

]]>
https://frontendmasters.com/blog/testing-types-in-typescript/feed/ 0 2485
Combining React Server Components with react-query for Easy Data Management https://frontendmasters.com/blog/combining-react-server-components-with-react-query-for-easy-data-management/ https://frontendmasters.com/blog/combining-react-server-components-with-react-query-for-easy-data-management/#comments Fri, 24 May 2024 15:27:11 +0000 https://frontendmasters.com/blog/?p=2378 React Server Components (RSC) are an exciting innovation in web development. In this post we’ll briefly introduce them, show what their purpose and benefits are, as well as their limitations. We’ll wrap up by showing how to pair them with react-query to help solve those limitations. Let’s get started!

Why RSC?

React Server Components, as the name implies, execute on the server—and the server alone. To see why this is significant, let’s take a whirlwind tour of how web development evolved over the last 10 or so years.

Prior to RSC, JavaScript frameworks (React, Svelte, Vue, Solid, etc) provided you with a component model for building your application. These components were capable of running on the server, but only as a synchronous operation for stringifying your components’ HTML to send down to the browser so it could server render your app. Your app would then render in the browser, again, at which point it would become interactive. With this model, the only way to load data was as a side-effect on the client. Waiting until your app reached your user’s browser before beginning to load data was slow and inefficient.

To solve this inefficiency, meta-frameworks like Next, SvelteKit, Remix, Nuxt, SolidStart, etc were created. These meta-frameworks provided various ways to load data, server-side, with that data being injected by the meta-framework into your component tree. This code was non-portable, and usually a little awkward. You’d have to define some sort of loader function that was associated with a given route, load data, and then expect it to show up in your component tree based on the rules of whatever meta-framework you were using.

This worked, but it wasn’t without issue. In addition to being framework-specific, composition also suffered; where typically components are explicitly passed props by whichever component renders them, now there are implicit props passed by the meta-framework, based on what you return from your loader. Nor was this setup the most flexible. A given page needs to know what data it needs up front, and request it all from the loader. With client-rendered SPAs we could just render whatever components we need, and let them fetch whatever data they need. This was awful for performance, but amazing for convenience.

RSC bridges that gap and gives us the best of both worlds. We get to ad hoc request whatever data we need from whichever component we’re rendering, but have that code execute on the server, without needing to wait for a round trip to the browser. Best of all, RSC also supports streaming, or more precisely, out-of-order streaming. If some of our data are slow to load, we can send the rest of the page, and push those data down to the browser, from the server, whenever they happen to be ready.

How do I use them?

At time of writing RSC are mostly only supported in Next.js, although the minimal framework Waku also supports them. Remix and TanStack Router are currently working on implementations, so stay tuned. I’ll show a very brief overview of what they look like in Next; consult those other frameworks when they ship. The ideas will be the same, even if the implementations differ slightly.

In Next, when using the new “app directory” (it’s literally a folder called “app” that you define your various routes in), pages are RSC by default. Any components imported by these pages are also RSC, as well as components imported by those components, and so on. When you’re ready to exit server components and switch to “client components,” you put the "use client" pragma at the top of a component. Now that component, and everything that component imports are client components. Check the Next docs for more info.

How do React Server Components work?

React Server Components are just like regular React Components, but with a few differences. For starters, they can be async functions. The fact that you can await asynchronous operations right in the component makes them well suited for requesting data. Note that asynchronous client components are a thing coming soon to React, so this differentiation won’t exist for too long. The other big difference is that these components run only on the server. Client components (i.e. regular components) run on the server, and then re-run on the client in order to “hydrate.” That’s how frameworks like Next and Remix have always worked. But server components run only on the server.

Server components have no hydration, since, again, they only execute on the server. That means you can do things like connect directly to a database, or use server-only APIs. But it also means there are many things you can’t do in RSCs: you cannot use effects or state, you cannot set up event handlers, or use browser-specific APIs like localStorage. If you violate any of those rules you’ll get errors.

For a more thorough introduction to RSC, check the Next docs for the app directory, or depending on when you read this, the Remix or TanStack Router docs. But to keep this post a reasonable length, let’s keep the details in the docs, and see how we use them.

Let’s put together a very basic proof of concept demo app with RSC, see how data mutations work, and some of their limitations. We’ll then take that same app (still using RSC) and see how it looks with react-query.

The demo app

As I’ve done before, let’s put together a very basic, very ugly web page for searching some books, and also updating the titles of them. We’ll also show some other data on this page: the various subjects, and tags we have, which in theory we could apply to our books (if this were a real web app, instead of a demo).

The point is to show how RSC and react-query work, not make anything useful or beautiful, so temper your expectations 🙂 Here’s what it looks like:

The page has a search input which puts our search term into the url to filter the books shown. Each book also has an input attached to it for us to update that book’s title. Note the nav links at the top, for the RSC and RSC + react-query versions. While the pages look and behave identically as far as the user can see, the implementations are different, which we’ll get into.

The data is all static, but the books are put into a SQLite database, so we can update the data. The binary for the SQLite db should be in that repo, but you can always re-create it (and reset any updates you’ve made) by running npm run create-db.

Let’s dive in.

A note on caching

At time of writing, Next is about to release a new version with radically different caching APIs and defaults. We won’t cover any of that for this post. For the demo, I’ve disabled all caching. Each call to a page, or API endpoint will always run fresh from the server. The client cache will still work, so if you click between the two pages, Next will cache and display what you just saw, client-side. But refreshing the page will always recreate everything from the server.

Loading the data

There are API endpoints inside of the api folder for loading data and for updating the books. I’ve added artificial delays of a few hundred ms for each of these endpoints, since they’re either loading static data, or running simple queries from SQLite. There’s also console logging for these data, so you can see what’s loading when. This will be useful in a bit.

Here’s what the terminal console shows for a typical page load in either the RSC or RSC + react-query version.

Let’s look at the RSC version

RSC Version

export default function RSC(props: { searchParams: any }) {
  const search = props.searchParams.search || "";

  return (
    <section className="p-5">
      <h1 className="text-lg leading-none font-bold">Books page in RSC</h1>
      <Suspense fallback={<h1>Loading...</h1>}>
        <div className="flex flex-col gap-2 p-5">
          <BookSearchForm />
          <div className="flex">
            <div className="flex-[2] min-w-0">
              <Books search={search} />
            </div>
            <div className="flex-1 flex flex-col gap-8">
              <Subjects />
              <Tags />
            </div>
          </div>
        </div>
      </Suspense>
    </section>
  );
}

We have a simple page header. Then we see a Suspense boundary. This is how out-of-order streaming works with Next and RSC. Everything above the Suspense boundary will render immediately, and the Loading... message will show until all the various data in the various components below have finished loading. React knows what’s pending based on what you’ve awaited. The Books, Subjects and Tags components all have fetches inside of them, which are awaited. We’ll look at one of them momentarily, but first note that, even though three different components are all requesting data, React will run them in parallel. Sibling nodes in the component tree can, and do, load data in parallel.

But if you ever have a parent and child component which both load data, then the child component will not (cannot) even start until the parent is finished loading. If the child data fetch depends on the parent’s loaded data, then this is unavoidable (you’d have to modify your backend to fix it), but if the data do not depend on each other, then you can solve this waterfall by just loading the data higher up in the component tree, and passing the various pieces down.
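Here’s a rough sketch of that last idea, hoisting two independent fetches into the parent so they run in parallel. The endpoint URLs and the child component props are placeholders, not the demo app’s exact code:

// A parent RSC kicks off both requests at once, then passes the results down as props,
// instead of letting each child await its own fetch (which would serialize them).
export default async function BooksPage() {
  const [booksResp, subjectsResp] = await Promise.all([
    fetch("http://localhost:3000/api/books"),
    fetch("http://localhost:3000/api/subjects"),
  ]);
  const { books } = await booksResp.json();
  const { subjects } = await subjectsResp.json();

  return (
    <>
      <BooksList books={books} />
      <SubjectsList subjects={subjects} />
    </>
  );
}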

Loading data

Let’s see the Books component:

import { FC } from "react";
import { BooksList } from "../components/BooksList";
import { BookEdit } from "../components/BookEditRSC";

export const Books: FC<{ search: string }> = async ({ search }) => {
  const booksResp = await fetch(`http://localhost:3000/api/books?search=${search}`, {
    next: {
      tags: ["books-query"],
    },
  });
  const { books } = await booksResp.json();

  return (
    <div>
      <BooksList books={books} BookEdit={BookEdit} />
    </div>
  );
};

We fetch and await our data right in the component! This was completely impossible before RSC. We then pass it down into the BooksList component. I separated this out so I could re-use the main BooksList component with both versions. The BookEdit prop I’m passing in is a React component that renders the textbox to update the title, and performs the update. This will differ between the RSC and react-query versions. More on that in a bit.

The next property in the fetch is Next-specific, and will be used to invalidate our data in just a moment. Experienced Next devs might spot a problem here, which we’ll get into very soon.

So you’ve loaded data, now what?

We have a page with three different RSCs which load and render data. Now what? If our page was just static content we’d be done. We loaded data, and displayed it. If that’s your use case, you’re done. RSCs are perfect for you, and you won’t need the rest of this post.

But what if you want to let your user interact with, and update your data?

Updating your data with Server Actions

To mutate data with RSC you use something called Server Actions. Check the docs for specifics, but here’s what our server action looks like

"use server";

import { revalidateTag } from "next/cache";

export const saveBook = async (id: number, title: string) => {
  await fetch("http://localhost:3000/api/books/update", {
    method: "POST",
    body: JSON.stringify({
      id,
      title,
    }),
  });
  revalidateTag("books-query");
};

Note the "use server" pragma at the top. That means the function we export is now a server action. saveBook takes an id, and a title; it posts to an endpoint to update our book in SQLite, and then calls revalidateTag with the same tag we passed to our fetch, before.

In real life, we wouldn’t even need the books/update endpoint. We’d just do the work right in the server action. But we’ll be re-using that endpoint in a bit, when we update data without server actions, and it’s nice to keep these code samples clean. The books/update endpoint opens up SQLite, and executes an UPDATE.
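For reference, a route handler like that might look roughly like this. This is a sketch, not the repo’s actual code; the file path, database filename, and the better-sqlite3 driver are all assumptions:

// app/api/books/update/route.ts (illustrative sketch)
import Database from "better-sqlite3";

export async function POST(request: Request) {
  const { id, title } = await request.json();

  const db = new Database("books.db"); // assumed db file name
  try {
    // Run the UPDATE for the book we were sent.
    db.prepare("UPDATE books SET title = ? WHERE id = ?").run(title, id);
  } finally {
    db.close();
  }

  return Response.json({ success: true });
}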

Let’s see the BookEdit component we use with RSC:

"use client";

import { FC, useRef, useTransition } from "react";
import { saveBook } from "../serverActions";
import { BookEditProps } from "../types";

export const BookEdit: FC<BookEditProps> = (props) => {
  const { book } = props;
  const titleRef = useRef<HTMLInputElement>(null);
  const [saving, startSaving] = useTransition();

  function doSave() {
    startSaving(async () => {
      await saveBook(book.id, titleRef.current!.value);
    });
  }

  return (
    <div className="flex gap-2">
      <input className="border rounded border-gray-600 p-1" ref={titleRef} defaultValue={book.title} />
      <button className="rounded border border-gray-600 p-1 bg-blue-300" disabled={saving} onClick={doSave}>
        {saving ? "Saving..." : "Save"}
      </button>
    </div>
  );
};

It’s a client component. We import the server action, and then just call it in a button’s event handler, wrapped in a transition so we can have saving state.

Stop and consider just how radical this is, and what React and Next are doing under the covers. All we did was create a vanilla function. We then imported that function, and called it from a button’s event handler. But under the covers a network request is made to an endpoint that’s synthesized for us. And then the revalidateTag tells Next what’s changed, so our RSC can re-run, re-request data, and send down updated markup.

Not only that, but all this happens in one round trip with the server.

This is an incredible engineering achievement, and it works! If you update one of the titles, and click save, you’ll see updated data show up in a moment (the update has an artificial delay since we’re only updating in a local SQLite instance)

What’s the catch?

This seems too good to be true. What’s the catch? Well, let’s see what the terminal shows when we update a book:

Ummmm, why is all of our data re-loading? We only called revalidateTag on our books, not our subjects or tags. The problem is that revalidateTag doesn’t tell Next what to reload, it tells it what to eject from its cache. The fact is, Next needs to reload everything for the current page when we call revalidateTag. This makes sense when you think about what’s really happening. These server components are not stateful; they run on the server, but they don’t live on the server. The request executes on our server, those RSCs render, and send down the markup, and that’s that. The component tree does not live on indefinitely on the server; our servers wouldn’t scale very well if they did!

So how do we solve this? For a use case like this, the solution would be to not turn off caching. We’d lean on Next’s caching mechanisms, whatever they look like when you happen to read this. We’d cache each of these data with different tags, and invalidate the tag related to the data we just updated.

The whole RSC tree will still re-render when we do that, but the requests for cached data would run quickly. Personally, I’m of the view that caching should be a performance tweak you add, as needed; it should not be a sine qua non for avoiding slow updates.

Unfortunately, there’s yet another problem with server actions: they run serially. Only one server action can be in flight at a time; they’ll queue if you try to violate this constraint.

This sounds genuinely unbelievable; but it’s true. If we artificially slow down our update a LOT, and then quickly click 5 different save buttons, we’ll see horrifying things in our network tab. If the extreme slowdown on the update endpoint seems unfair on my part, remember: you should never, ever assume your network will be fast, or even reliable. Occasional, slow network requests are inevitable, and server actions will do the worst possible thing under those circumstances.

This is a known issue, and will presumably be fixed at some point. But the re-loading without caching issue is unavoidable with how Next app directory is designed.

Just to be clear, even with these limitations, server actions are still outstanding for some use cases. If you have a web page with a form and a submit button, server actions shine: none of these limitations will matter (assuming your form doesn’t depend on a bunch of different data sources). In fact, server actions go especially well with forms. You can even set the “action” of a form (in Next) directly to a server action. See the docs for more info, as well as for related hooks like useFormStatus.
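As a quick sketch of that form-centric flow (updateBook here is a hypothetical server action that accepts FormData, which is what form actions receive; it is not code from the demo app):

"use client";

import { useFormStatus } from "react-dom";
import { updateBook } from "../serverActions"; // hypothetical (formData: FormData) => Promise<void> action

function SubmitButton() {
  // useFormStatus reads the pending state of the surrounding form.
  const { pending } = useFormStatus();
  return <button disabled={pending}>{pending ? "Saving..." : "Save"}</button>;
}

export function BookForm({ id, title }: { id: number; title: string }) {
  return (
    <form action={updateBook}>
      <input type="hidden" name="id" value={id} />
      <input name="title" defaultValue={title} />
      <SubmitButton />
    </form>
  );
}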

But back to our app. We don’t have a page with a single form and no data sources. We have lots of little forms, on a page with lots of data sources. Server actions won’t work well here, so let’s see an alternative.

react-query

React Query is probably the most mature, well-maintained data management library in the React ecosystem. Unsurprisingly, it also works well with RSC.

To use react-query we’ll need to install two packages: npm i @tanstack/react-query @tanstack/react-query-next-experimental. Don’t let the experimental in the name scare you; it’s been out for a while, and works well.

Next we’ll make a Providers component, and render it from our root layout

"use client";

import { QueryClient, QueryClientProvider } from "@tanstack/react-query";
import { ReactQueryStreamedHydration } from "@tanstack/react-query-next-experimental";
import { FC, PropsWithChildren, useState } from "react";

export const Providers: FC<PropsWithChildren<{}>> = ({ children }) => {
  const [queryClient] = useState(() => new QueryClient());

  return (
    <QueryClientProvider client={queryClient}>
      <ReactQueryStreamedHydration>{children}</ReactQueryStreamedHydration>
    </QueryClientProvider>
  );
};
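Wiring that into the root layout is just a matter of wrapping children with it. Something like this (the import path is whatever you named the file; the rest of your real layout stays as-is):

// app/layout.tsx (sketch)
import type { ReactNode } from "react";
import { Providers } from "./providers"; // assumed file name

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <html lang="en">
      <body>
        <Providers>{children}</Providers>
      </body>
    </html>
  );
}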

Now we’re ready to go.

Loading data with react-query

The long and short of it is that we use the useSuspenseQuery hook from inside client components. Let’s see some code. Here’s the Books component from the react-query version of our app.

"use client";

import { FC } from "react";
import { useSuspenseQuery } from "@tanstack/react-query";
import { BooksList } from "../components/BooksList";
import { BookEdit } from "../components/BookEditReactQuery";
import { useSearchParams } from "next/navigation";

export const Books: FC<{}> = () => {
  const params = useSearchParams();
  const search = params.get("search") ?? "";

  const { data } = useSuspenseQuery({
    queryKey: ["books-query", search],
    queryFn: async () => {
      const booksResp = await fetch(`http://localhost:3000/api/books?search=${search}`);
      const { books } = await booksResp.json();

      return { books };
    },
  });

  const { books } = data;

  return (
    <div>
      <BooksList books={books} BookEdit={BookEdit} />
    </div>
  );
};

Don’t let the "use client" pragma fool you. This component still renders on the server, and that fetch also happens on the server during the initial load of the page.

As the url changes, the useSearchParams result changes, and a new query is fired off by our useSuspenseQuery hook, from the browser. This would normally suspend the page, but I wrap the call to router.push in startTransition, so the existing content stays on the screen. Check the repo for details.
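The relevant bit looks roughly like this (a sketch; the real form component in the repo has more going on, and the names here are illustrative):

"use client";

import { useTransition } from "react";
import { usePathname, useRouter } from "next/navigation";

export function useBookSearch() {
  const router = useRouter();
  const pathname = usePathname();
  const [isPending, startTransition] = useTransition();

  const search = (term: string) => {
    // Wrapping the navigation in a transition keeps the current UI on screen
    // while useSuspenseQuery re-runs, instead of suspending to the nearest fallback.
    startTransition(() => {
      router.push(term ? `${pathname}?search=${encodeURIComponent(term)}` : pathname);
    });
  };

  return { search, isPending };
}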

Updating data with react-query

We already have the /books/update endpoint for updating a book. How do we tell react-query to re-run whichever queries were attached to that data? The answer is the queryClient.invalidateQueries API. Let’s take a look at the BookEdit component for react-query

"use client";

import { FC, useRef, useTransition } from "react";
import { BookEditProps } from "../types";
import { useQueryClient } from "@tanstack/react-query";

export const BookEdit: FC<BookEditProps> = (props) => {
  const { book } = props;
  const titleRef = useRef<HTMLInputElement>(null);
  const queryClient = useQueryClient();
  const [saving, startSaving] = useTransition();

  const saveBook = async (id: number, newTitle: string) => {
    startSaving(async () => {
      await fetch("/api/books/update", {
        method: "POST",
        body: JSON.stringify({
          id,
          title: newTitle,
        }),
      });

      await queryClient.invalidateQueries({ queryKey: ["books-query"] });
    });
  };

  return (
    <div className="flex gap-2">
      <input className="border rounded border-gray-600 p-1" ref={titleRef} defaultValue={book.title} />
      <button className="rounded border border-gray-600 p-1 bg-blue-300" disabled={saving} onClick={() => saveBook(book.id, titleRef.current!.value)}>
        {saving ? "Saving..." : "Save"}
      </button>
    </div>
  );
};

The saveBook function calls out to the same book updating endpoint as before. We then call invalidateQueries with the first part of the query key, books-query. Remember, the actual queryKey we used in our query hook was queryKey: ["books-query", search]. Calling invalidateQueries with the first piece of that key will invalidate everything that starts with that key, and will immediately re-fire any of those queries which are still on the page. So if you started out with an empty search, then searched for X, then Y, then Z, and updated a book, this code will clear the cache of all those entries, and then immediately re-run the Z query, and update our UI.

And it works.

What’s the catch?

The downside here is that we need two roundtrips from the browser to the server. The first roundtrip updates our book, and when that finishes, we then, from the browser, call invalidateQueries, which causes react-query to send a new network request for the updated data.

This is a surprisingly small price to pay. Remember, with server actions, calling revalidateTag will cause your entire component tree to re-render, which by extension will re-request all their various data. If you don’t have everything cached (on the server) properly, it’s very easy for this single round trip to take longer than the two round trips react-query needs. I say this from experience. I recently helped a friend / founder build a financial dashboard app. I had react-query set up just like this, and also implemented a server action to update a piece of data. And I had the same data rendered, and updated twice: once in an RSC, and again adjacently in a client component from a useSuspenseQuery hook. I basically fired off a race to see which would update first, certain the server action would, but was shocked to see react-query win. I initially thought I’d done something wrong until I realized what was happening (and hastened to roll back my server action work).

Playing on hard mode

There’s one obnoxious imperfection hiding. Let’s find it, and fix it.

Fixing routing when using react-query

Remember, when we search our books, I’m calling router.push which adds a querystring to the url, which causes useSearchParams() to update, which causes react-query to query new data. Let’s look at the network tab when this happens.

before our books endpoint can be called, it looks like we have other things happening. This is the navigation we caused when we called router.push. Next is basically rendering to a new page. The page we’re already on, except with a new querystring. Next is right to assume it needs to do this, but in practice react-query is handling our data. We don’t actually need, or want Next to render this new page; we just want the url to update, so react-query can request new data. If you’re wondering why it navigates to our new, changed page twice, well, so am I. Apparently, the RSC identifier is being changed, but I have no idea why. If anyone does, please reach out to me.

Next has no solutions for this.

The closest Next can come is to let you use window.history.pushState. That will trigger a client-side url update, similar to what used to be called shallow routing in prior versions of Next. This does in fact work; however, it’s not integrated with transitions for some reason. So when it’s called, and our useSuspenseQuery hook updates, our current UI will suspend, and our nearest Suspense boundary will show its fallback. That’s an awful user experience. I’ve reported this bug here; hopefully it gets a fix soon.

Next may not have a solution, but react-query does. If you think about it, we already know what query we need to run; we’re just stuck waiting on Next to finish navigating to an unchanging RSC page. What if we could pre-fetch this new endpoint request, so it’s already running by the time Next finally finishes rendering our new (unchanged) page? We can, since react-query has an API just for this. Let’s see how.

Let’s look at the react-query search form component. In particular, the part which triggers a new navigation:

startTransition(() => {
  const search = searchParams.get("search") ?? "";
  queryClient.prefetchQuery({
    queryKey: ["books-query", search],
    queryFn: async () => {
      const booksResp = await fetch(`http://localhost:3000/api/books?search=${search}`);
      const { books } = await booksResp.json();

      return { books };
    },
  });

  router.push(currentPath + (queryString ? "?" : "") + queryString);
});

The call to queryClient.prefetchQuery is the new piece here. prefetchQuery takes the same options as useSuspenseQuery, and runs that query, now. Later, when Next is finished, and react-query attempts to run the same query, it’s smart enough to see that the request is already in flight, and so just latches onto that active promise, and uses the result.

Here’s our network chart now:

Now nothing is delaying our endpoint request from firing. And since all data loading is happening in react-query, that navigation to our RSC page (or even two navigations) should be very, very fast.

Removing the duplication

If you’re thinking the duplication between the prefetch and the query itself is gross and fragile, you’re right. So just move it to a helper function. In a real app you’d probably move this boilerplate to something like this:

export const makeBooksSearchQuery = (search: string) => {
  return {
    queryKey: ["books-query", search ?? ""],
    queryFn: async () => {
      const booksResp = await fetch(`http://localhost:3000/api/books?search=${search}`);
      const { books } = await booksResp.json();

      return { books };
    },
  };
};

and then use it:

const { data } = useSuspenseQuery(makeBooksSearchQuery(search));

as needed:

queryClient.prefetchQuery(makeBooksSearchQuery(search));

But for this demo I opted for duplication and simplicity.

Before moving on, let’s take a moment and point out that all of this was only necessary because we had data loading tied to the URL. If we just clicked a button to set client-side state, and trigger a new data request, none of this would ever be an issue. Next would not route anywhere, and our client-side state update would trigger a new react-query fetch.

What about bundle size?

When we did our react-query implementation, we changed our Books component to be a client component by adding the "use client" pragma. If you’re wondering whether that will cause an increase in our bundle size, you’re right. In the RSC version, that component only ever ran on the server. As a client component, it now has to run in both places, which will increase our bundle size a bit.

Honestly, I wouldn’t worry about it, especially for apps like this, with lots of different data sources that are interactive, and updating. This demo only had a single mutation, but it was just that; a demo. If we were to build this app for real, there’d be many mutation points, each with potentially multiple queries in need of invalidation.

If you’re curious, it’s technically possible to get the best of both worlds. You could load data in an RSC, and then pass that data to the regular useQuery hook via the initialData option. You can check the docs for more info, but I honestly don’t think it’s worth it. You’d now need to define your data loading (the fetch call) in two places, or manually build an isomorphic fetch helper function to share between them. And then with actual data loading happening in RSCs, any navigations back to the same page (i.e. for querystrings) would re-fire those queries, when in reality react-query is already running those query updates client side. To fix that, you’d have to be certain to only ever use window.history.pushState like we talked about. The useQuery hook doesn’t suspend, so you wouldn’t need transitions for those URL changes. That’s good, since pushState won’t suspend your content, but now you have to manually track all your loading states; if you have three pieces of data you want loaded before revealing a UI (like we did above) you’d have to manually track and aggregate those three loading states. It would work, but I highly doubt the complexity would be worth it. Just live with the very marginal bundle size increase.

Just use client components and let react-query remove the complexity with useSuspenseQuery.

Wrapping up

This was a long post, but I hope it was valuable. Next’s app directory is an incredible piece of infrastructure that lets us request data on the server, render, and even stream component content from that data, all using the single React component model we’re all used to.

There are some things to get right, but depending on the type of app you’re building, react-query can simplify things a great deal.

🆕 Update

Since publishing this post it was brought to my attention that these fetch calls from the server will not include cookie info. This is by design in Next, unfortunately. Track this issue for updates.

Unfortunately those cookies are needed in practice, for your auth info to be passed to your data requests on the backend. 

The best workaround here would be to read your cookies in the root RSC, and then pass them to the Providers component we already have, for setting up our react-query provider, to be placed onto context. This, by itself, would expose our secure, likely httpOnly cookies into our client bundle, which is bad. Fortunately there’s a library that allows you to encrypt them in a way that they only ever show up on the server.

You’d read these cookie values in any client components that use useSuspenseQuery, and pass them along in the fetch calls that run on the server; on the client, where those values would be empty, you’d do nothing, and rely on your browser’s fetch to send the cookies along.

]]>
https://frontendmasters.com/blog/combining-react-server-components-with-react-query-for-easy-data-management/feed/ 2 2378
Prefetching When Server Loading Won’t Do https://frontendmasters.com/blog/prefetching-when-server-loading-wont-do/ https://frontendmasters.com/blog/prefetching-when-server-loading-wont-do/#respond Wed, 15 May 2024 23:26:46 +0000 https://frontendmasters.com/blog/?p=2200 This is a post about a boring* topic: loading data.

(* Just kidding it will be amazing and engaging.)

Not how to load data, but instead we’ll take a step back, and look at where to load data. Not in any particular framework, either; this is going to be more broadly about data loading in different web application architectures, and particularly how that impacts performance.

We’ll start with client-rendered sites and talk about some of the negative performance characteristics they may have. Then we’ll move on to server-rendered apps, and then to the lesser-known out-of-order streaming model. To wrap up, we’ll talk about a surprisingly old, rarely talked about way to effectively load slow data in a server-rendered application. Let’s get started!

Client Rendering

Application metaframeworks like Next and SvelteKit have become incredibly popular. In addition to offering developer conveniences like file system-based routing and scaffolding of API endpoints, they also, more importantly, allow you to server render your application.

Why is server rendering so important? Let’s take a look at how the world looks with the opposite: client-rendered web applications, commonly referred to as “single page applications” or SPAs. Let’s start with a simplified diagram of what a typical request for a page looks like in an SPA.

The browser makes a request to your site. Let’s call it yoursite.io. With an SPA, it usually sends down a single, mostly empty HTML page, which has whatever script and style tags needed to run the site. This shell of a page might display your company logo, your static header, your copyright message in the footer, etc. But mostly it exists to load and run JavaScript, which will build the “real” site.

This is why these sites are called “single page” applications. There’s a single web page for the whole app, which runs code on the client to detect URL changes, and request and render whatever new UI is needed.

Back to our diagram. The initial web page was sent back from the web server as HTML. Now what? The browser will parse that HTML and find script tags. These script tags contain our application code, our JavaScript framework, etc. The browser will send requests back to the web server to load these scripts. Once the browser gets them back, it’ll parse, and execute them, and in so doing, begin executing your application code.

At this point whatever client-side router you’re using (e.g. react-router, TanStack Router, etc.) will render your current page.

But there’s no data yet!

So you’re probably displaying loading spinners or skeleton screens or the like. To get the data, your client-side code will now make yet another request to your server to fetch whatever data are needed, so you can display your real, finished page to your user. This could be via a plain old fetch, react-query, or whatever. Those details won’t concern us here.

SSR To The Rescue

There is a pretty clear solution here. The server already has the URL of the request, so instead of only returning that shell page, it could (should) request the data as well, get the page all ready to go, and send down the complete page.

Somehow.

This is how the web always worked with tools like PHP or ASP.NET. But when your app is written with a client-side JavaScript framework like React or Svelte, it’s surprisingly tricky. These frameworks all have APIs for stringifying a component tree into HTML on the server, so that markup can be sent down to the browser. But if a component in the middle of that component tree needs data, how do you load it on the server, and then somehow inject it where it’s needed? And then have the client acknowledge that data, and not re-request it. And of course, once you solve these problems and render your component tree, with data, on the server, you still need to re-render this component tree on the client, so your client-side code, like event handlers and such, start working.

This act of re-rendering the app client side is called hydration. Once it’s happened, we say that our app is interactive. Getting these things right is one of the main benefits modern application meta-frameworks like Next and SvelteKit provide.

Let’s take a look at what our request looks like in this server-rendered setup:

That’s great. The user sees the full page much, much sooner. Sure, it’s not interactive yet, but if you’re not shipping down obscene amounts of JavaScript, there’s a really good chance hydration will finish before the user can manage to click on any buttons.

We won’t get into all this, but Google themselves tell you this is much better for SEO as well.

So, what’s the catch? Well, what if our data are slow to load. Maybe our database is busy. Maybe it’s a huge request. Maybe there is a network hiccup. Or maybe you just depend on slow services you can’t control. It’s not rare.

This might be worse than the SPA we started with. Even though we needed multiple round trips to the server to get data, at least we were displaying a shell of a page quickly. Here, the initial request to the server will just hang and wait as long as needed for that data to load on the server, before sending down the full page. To the user, their browser (and your page) could appear unresponsive, and they might just give up and go back.

Out of Order Streaming

What if we could have the best of all worlds? What if we could server render, like we saw, but when some data are slow to load, ship the rest of the page with the data that we have, and let the server push down the remaining data when ready? This is called streaming, or more precisely, out-of-order streaming (streaming, without the out-of-order part, is a separate, much more limited thing which we won’t cover here).

Let’s take a hypothetical example where data abc and data xyz are slow to load.

With out-of-order streaming we can load the to-do data on the server, and send the page with just that data down to the user, immediately. The other two pieces of data have not loaded, yet, so our UI will display some manner of loading indicator. When the next piece of data is ready, the server pushes it down:

What’s the catch?

So does this solve all of our problems? Yes, but… only if the framework you’re using supports it. To stream with Next.js app directory you’ll use Suspense components with RSC. With SvelteKit you just return a promise from your loader. Remix supports this too, with an API that’s in the process of changing, so check their docs. SolidStart will also support this, but as of writing that entire project is still in beta, so check its docs when it comes out.

Some frameworks do not support this, like Astro and Next if you’re using the legacy pages directory.

What if we’re using those projects, and we have some dependencies on data which are slow to load? Are we stuck rendering this data in client code, after hydration?

Prefetching to the rescue

The web platform has a feature called prefetching. This lets us add a <link> tag to the <head> section of our HTML page, with a rel="prefetch" attribute, and an href attribute of the URL we want to prefetch. We can put service endpoint calls here, so long as they use the GET verb. If you need to pre-fetch data from an endpoint that uses POST, you’ll need to proxy it through an endpoint that uses GET. It’s worth noting that you can also prefetch with an HTTP header if that’s more convenient; see this post for more information.

When we do this, our page will start pre-fetching our resources as soon as the browser parses the link tag. Since it’s in the <head>, that means it’ll start pre-fetching at the same time our scripts and stylesheets are requested. So we no longer need to wait until our script tags load, parse, and hydrate our app. Now the data we need will start pre-fetching immediately. When hydration does complete, and our application code requests those same endpoints, the browser will be smart enough to serve that data from the prefetch cache.

Let’s see prefetching in action

To see pre-fetching in action, we’ll use Astro. Astro is a wonderful web framework that doesn’t get nearly enough attention. One of the very few things it can’t do is out-of-order streaming (for now). But let’s see how we can improve life with pre-fetching.

The repo for the code I’ll be showing is here. It’s not deployed anywhere, for fear of this blog post getting popular, and me getting a big bill from Vercel. But the project has no external dependencies, so you can clone, install, and run locally. You could also deploy this to Vercel yourself if you really want to see it in action.

I whipped up a very basic, very ugly web page that hits some endpoints to pull down a hypothetical list of books, and some metadata about the library, which renders the books once ready. It looks like this:

The endpoints return static data, which is why there are no external dependencies. I added a manual delay of 700ms to these endpoints (sometimes you have slow services and there’s nothing you can do about it), and I also installed and imported some large JavaScript libraries (d3, framer-motion, and recharts) to make sure hydration would take a moment or two, like with most production applications. And since these endpoints are slow, they’re a poor candidate for server fetching.
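For context, an Astro endpoint with that kind of artificial delay looks roughly like this (a sketch; the data and the file name are placeholders, not the repo’s exact code):

// src/pages/api/books.ts (illustrative sketch)
import type { APIRoute } from "astro";

const delay = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

export const GET: APIRoute = async () => {
  await delay(700); // simulate a slow upstream service

  const books = [{ id: 1, title: "Some Book" }];
  return new Response(JSON.stringify(books), {
    headers: { "Content-Type": "application/json" },
  });
};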

So let’s request them client-side, see the performance of the page, and then add pre-fetching to see how that improves things.

The client-side fetching looks like this:

useEffect(() => {
  fetch("/api/books")
    .then((resp) => resp.json())
    .then((books) => {
      setBooks(books);
    });

  fetch("/api/books-count")
    .then((resp) => resp.json())
    .then((booksCountResp) => {
      setCount(booksCountResp.count);
    });
}, []);

Nothing fancy. Nothing particularly resilient here. Not even any error handling. But perfect for our purposes.

Network diagram without pre-fetching

Running this project, deployed to Vercel, my network diagram looks like this:

Notice all of the script and style resources, which need to be requested and processed before our client-side fetches start (on the last two lines).

Adding pre-fetching

I’ve added a second page to this project, called with-prefetch, which is the same as the index page. Except now, let’s see how we can add some <link> tags to request these resources sooner.

First, in the root layout, let’s add this in the head section

<slot name="head"></slot>

this gives us the ability to (but does not require us to) add content to our HTML document’s <head>. This is exactly what we need. Now we can make a PrefetchBooks React component:

import type { FC } from "react";

export const PrefetchBooks: FC<{}> = (props) => {
  return (
    <>
      <link rel="prefetch" href="/api/books" as="fetch" />
      <link rel="prefetch" href="/api/books-count" as="fetch" />
    </>
  );
};

Then render it in our prefetching page, like so

<PrefetchBooks slot="head" />

Note the slot attribute on the React component, which tells Astro (not React) where to put this content.

With that, if we run that page, we’ll see our link tags in the head

Now let’s look at our updated network diagram:

Notice our endpoint calls now start immediately, on lines 3 and 4. Then later, in the last two lines, we see the real fetches being executed, at which point they just latch onto the prefetch calls already in flight.

Let’s put some hard numbers on this. When I ran a webpagetest mobile Lighthouse analysis on the version of this page without the pre-fetch, I got the following.

Note the LCP (Largest Contentful Paint) value. That’s essentially telling us when the page looks finished to a user. Remember, the Lighthouse test simulates your site on the slowest mobile device imaginable, which is why it’s 4.6 seconds.

When I re-ran the same test on the pre-fetched version, things improved by about a second

Definitely much better, but still not good; it never will be until you can get your backend fast. But with some intelligent, targeted pre-fetching, you can at least improve things.

Parting thoughts

Hopefully all of your back-end data requirements will be forever fast in your developer journeys. But when they’re not, prefetching resources is a useful tool to keep in your toolbelt.

]]>
https://frontendmasters.com/blog/prefetching-when-server-loading-wont-do/feed/ 0 2200
Using Auth.js with SvelteKit https://frontendmasters.com/blog/using-nextauth-now-auth-js-with-sveltekit/ https://frontendmasters.com/blog/using-nextauth-now-auth-js-with-sveltekit/#respond Mon, 29 Apr 2024 15:22:25 +0000 https://frontendmasters.com/blog/?p=1764 SvelteKit is an exciting framework for shipping performant web applications with Svelte. I’ve previously written an introduction on it, as well as a deeper dive on data handling and caching.

In this post, we’ll see how to integrate Auth.js (previously NextAuth) into a SvelteKit app. It might seem surprising to hear that this works with SvelteKit, but the project has gotten popular enough that much of it has been split into a framework-agnostic package, @auth/core. The Auth.js name is actually a somewhat recent rebranding of NextAuth.

In this post we’ll cover the basic config for @auth/core: we’ll add a Google Provider and configure our sessions to persist in DynamoDB.

The code for everything is here in a GitHub repo, but you won’t be able to run it without setting up your own Google Application credentials, as well as a Dynamo table (which we’ll get into).

The initial setup

We’ll build the absolute minimum skeleton app needed to demonstrate authentication. We’ll have our root layout read whether the user is logged in, and show a link to content that’s limited to logged in users, and a log out button if so; or a log in button if not. We’ll also set up an auth check with redirect in the logged in content, in case the user browses directly to the logged in URL while logged out.

Let’s create a SvelteKit project if we don’t have one already, using the instructions here. Choose “Skeleton Project” when prompted.

Now let’s install some packages we’ll be using:

npm i @auth/core @auth/sveltekit

Let’s create a top-level layout that will use our auth data. First, our server loader, in a file named +layout.server.ts. This will hold our logged-in state, which for now is always false.

export const load = async ({ locals }) => {
  return {
    loggedIn: false,
  };
};

Now let’s make the actual layout, in +layout.svelte with some basic markup

<script lang="ts">
  import type { PageData } from './$types';
  import { signIn, signOut } from '@auth/sveltekit/client';

  export let data: PageData;
  $: loggedIn = data.loggedIn;
</script>

<main>
  <h1>Hello there! This is the shared layout.</h1>

  {#if loggedIn}
    <div>Welcome!</div>
    <a href="/logged-in">Go to logged in area</a>
    <br />
    <br />
    <button on:click={() => signOut()}>Log Out</button>
  {:else}
    <button on:click={() => signIn('google')}>Log in</button>
  {/if}
  <section>
    <slot />
  </section>
</main>

There should be a root +page.svelte file that was generated when you scaffolded the project, with something like this in there

<h1>This is the home page</h1>
<p>Visit <a href="https://kit.svelte.dev">kit.svelte.dev</a> to read the SvelteKit docs</p>

Feel free to just leave it.

Next, we’ll create a route called logged-in. Create a folder in routes called logged-in and create a +page.server.ts which for now will always just redirect you out.

import { redirect } from "@sveltejs/kit";

export const load = async ({}) => {
  redirect(302, "/");
};

Now let’s create the page itself, in +page.svelte and add some markup

<h3>This is the logged in page</h3>

And that’s about it. Check out the GitHub repo to see everything, including just a handful of additional styles.

Adding Auth

Let’s get started with the actual authentication.

First, create an environment variable in your .env file called AUTH_SECRET and set it to a random string that’s at least 32 characters. If you’re looking to deploy this to a host like Vercel or Netlify, be sure to add your environment variable in your project’s settings according to how that host does things.

Next, create a hooks.server.ts (or .js) file directly under src. The docs for this file are here, but it essentially allows you to add application-wide side effects. Authentication falls under this, which is why we configure it here.

Now let’s start integrating auth. We’ll start with a very basic config:

import { SvelteKitAuth } from "@auth/sveltekit";
import { AUTH_SECRET } from "$env/static/private";

const auth = SvelteKitAuth({
  providers: [],
  session: {
    maxAge: 60 * 60 * 24 * 365,
    strategy: "jwt",
  },

  secret: AUTH_SECRET,
});

export const handle = auth.handle;

We tell auth to store our authentication info in a JWT token, and configure a max age for the session as 1 year. We provide our secret, and a (currently empty) array of providers.

Adding our provider

Providers are what perform the actual authentication of a user. There’s a very, very long list of options to choose from, which are listed here. We’ll use Google. First, we’ll need to create application credentials. So head on over to the Google Developers console. Click on credentials, and then “Create Credentials”

Click it, then choose “OAuth Client ID.” Choose web application, and give your app a name.

For now, leave the other options empty, and click Create.


Before closing that modal, grab the client ID and client secret values, and paste them into environment variables for your app

GOOGLE_AUTH_CLIENT_ID=....
GOOGLE_AUTH_SECRET=....

Now let’s go back into our hooks.server.ts file, and import our new environment variables:

import { AUTH_SECRET, GOOGLE_AUTH_CLIENT_ID, GOOGLE_AUTH_SECRET } from "$env/static/private";

and then add our provider

providers: [
  GoogleProvider({
    clientId: GOOGLE_AUTH_CLIENT_ID,
    clientSecret: GOOGLE_AUTH_SECRET
  })
],
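One detail not shown above: GoogleProvider needs to be imported at the top of hooks.server.ts. With @auth/core it lives at the path below (provider import paths have shifted between Auth.js versions, so double-check against the docs for the version you installed):

import GoogleProvider from "@auth/core/providers/google";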

and then export our auth handler as our hooks handler.

export const handle = auth.handle;

Note that if you had other handlers you wanted SvelteKit to run, you can use the sequence helper:

import { sequence } from "@sveltejs/kit/hooks";

export const handle = sequence(otherHandleFn, auth.handle);

Unfortunately if we try to login now, we’re greeted by an error:

Clicking error details provides some more info:

We need to tell Google that this redirect URL is in fact valid. Go back to our Google Developer Console, open the credentials we just created, and add this URL in the redirect urls section.

And now, after saving (and possibly waiting a few seconds) we can click login, and see a list of our Google accounts available, and pick the one we want to log in with

Choosing one of the accounts should log you in, and bring you right back to the same page you were just looking at.

So you’ve successfully logged in, now what?

Being logged in is by itself useless without some way to check logged in state, in order to change content and grant access accordingly. Let’s go back to our layout’s server loader

export const load = async ({ locals }) => {
  return {
    loggedIn: false,
  };
};

We previously pulled in that locals property. Auth.js adds a getSession method to this, which allows us to grab the current authentication, if any. We just logged in, so let’s grab the session and see what’s there

export const load = async ({ locals }) => {
  const session = await locals.getSession();
  console.log({ session });

  return {
    loggedIn: false,
  };
};

For me, this logs the following:

All we need right now is a simple boolean indicating whether the user is logged in, so let’s send down a boolean on whether the user object exists:

export const load = async ({ locals }) => {
  const session = await locals.getSession();
  const loggedIn = !!session?.user;

  return {
    loggedIn,
  };
};

and just like that, our page updates:

The link to our logged-in page still doesn’t work, since it’s still always redirecting. We could run the same code we did before, and call locals.getSession to see if the user is logged in. But we already did that, and stored the loggedIn property in our layout’s loader. This makes it available to any routes underneath. So let’s grab it, and conditionally redirect based on its value.

import { redirect } from "@sveltejs/kit";

export const load = async ({ parent }) => {
  const parentData = await parent();

  if (!parentData.loggedIn) {
    redirect(302, "/");
  }
};

And now our logged-in page works:

Persisting our authentication

We could end this post here. Our authentication works, and we integrated it into application state. Sure, there’s a myriad of other auth providers (GitHub, Facebook, etc), but those are just variations on the same theme.

But one topic we haven’t discussed is authentication persistence. Right now our entire session is stored in a JWT, on our user’s machine. This is convenient, but it does come with some downsides, namely that this data could be stolen. An alternative is to persist our users’ sessions in an external database. This post discusses the various tradeoffs, but most of the downsides of stateful (i.e. stored in a database) solutions are complexity and the burden of having to reach out to external storage to grab session info. Fortunately, Auth.js removes the complexity burden for us. As far as performance concerns go, we can choose a storage mechanism that’s known for being fast and effective: in our case we’ll look at DynamoDB.

Adapters

The mechanism by which Auth.js persists our authentication sessions is database adapters. As before, there are many to choose from. We’ll use DynamoDB. Compared to providers, the setup for database adapters is a bit more involved, and a bit more tedious. In order to keep the focus of this post on Auth.js, we won’t walk through setting up each and every key field, TTL setting, and GSI—to say nothing of AWS credentials if you don’t have them already. If you’ve never used Dynamo and are curious, I wrote an introduction here. If you’re not really interested in Dynamo, this section will show you the basics of setting up database adapters, which you can apply to any of the (many) others you might prefer to use.

That said, if you’re interested in implementing this yourself, the adapter docs provide CDK and CloudFormation templates for the Dynamo table you need; or, if you want a low-dev-ops solution, they even list out the keys, TTL, and GSI structure here, which is pretty painless to just set up.

We’ll assume you’ve got your DynamoDB instance set up, and look at the code to connect it. First, we’ll install some new libraries

npm i @auth/dynamodb-adapter @aws-sdk/lib-dynamodb @aws-sdk/client-dynamodb

Next, make sure your Dynamo table name, as well as your AWS credentials, are in environment variables

Now we’ll go back to our hooks.server.ts file, and whip up some boilerplate (which, to be honest, is mostly copied right from the docs).

import { GOOGLE_AUTH_CLIENT_ID, GOOGLE_AUTH_SECRET, AMAZON_ACCESS_KEY, AMAZON_SECRET_KEY, DYNAMO_AUTH_TABLE, AUTH_SECRET } from "$env/static/private";

import { DynamoDB, type DynamoDBClientConfig } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocument } from "@aws-sdk/lib-dynamodb";
import { DynamoDBAdapter } from "@auth/dynamodb-adapter";
import type { Adapter } from "@auth/core/adapters";

const dynamoConfig: DynamoDBClientConfig = {
  credentials: {
    accessKeyId: AMAZON_ACCESS_KEY,
    secretAccessKey: AMAZON_SECRET_KEY,
  },

  region: "us-east-1",
};

const client = DynamoDBDocument.from(new DynamoDB(dynamoConfig), {
  marshallOptions: {
    convertEmptyValues: true,
    removeUndefinedValues: true,
    convertClassInstanceToMap: true,
  },
});

and now we add our adapter to our auth config:

  adapter: DynamoDBAdapter(client, { tableName: DYNAMO_AUTH_TABLE }),

and now, after logging out, and logging back in, we should see some entries in our DynamoDB instance

Authentication hooks

The auth-core package provides a number of callbacks you can hook into, if you need to do some custom processing.

The signIn callback is invoked, predictably, after a successful login. It’s passed an account object from whatever provider was used, Google in our case. One use case for this callback could be to optionally look up, and sync, legacy user metadata you might have stored for your users before switching over to OAuth authentication with established providers.

async signIn({ account }) {
  const userSync = await getLegacyUserInfo(account.providerAccountId);
  if (userSync) {
    account.syncdId = userSync.sk;
  }

  return true;
},

The jwt callback gives you the ability to store additional info in the authentication token (you can use this regardless of whether you’re using a database adapter). It’s passed the (possibly mutated) account object from the signIn callback.

async jwt({ token, account }) {
  token.userId ??= account?.syncdId || account?.providerAccountId;
  if (account?.syncdId) {
    token.legacySync = true;
  }
  return token;
}

We’re setting a single userId onto our token that’s either the syncdId we just looked up, or the providerAccountId already attached to the provider account. If you’re curious about the ??= operator, that’s the nullish coalescing assignment operator.

Lastly, the session callback gives you an opportunity to shape the session object that’s returned when your application code calls locals.getSession()

async session({ session, user, token }: any) {
  session.userId = token.userId;
  if (token.legacySync) {
    session.legacySync = true;
  }
  return session;
}

now our code could look for the legacySync property, to discern that a given login has already sync’d with a legacy account, and therefore know not to ever prompt the user about this.

Extending the types

Let’s say we do extend the default session type, like we did above. Let’s see how we can tell TypeScript about the things we’re adding. Basically, we need to use a TypeScript feature called interface merging. We essentially re-declare an interface that already exists, add stuff, and then TypeScript does the grunt work of merging (hence the name) the original type along with the changes we’ve made.

Let’s see it in action. Go to the app.d.ts file SvelteKit adds to the root src folder, and add this

declare module "@auth/core/types" {
  interface Session {
    userId: string;
    provider: string;
    legacySync: boolean;
  }
}

export {};

We have to put the interface in the right module, and then we add what we need to add.

Note the odd export {}; at the end. There has to be at least one ESM import or export so that TypeScript treats the file as a module. SvelteKit by default adds this, but make sure it’s present in your final product.
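With the merge in place, those extra fields are typed anywhere we read the session. A minimal sketch in a server loader (the route is arbitrary):

export const load = async ({ locals }) => {
  const session = await locals.getSession();

  // userId and legacySync are now part of the Session type, thanks to the merge above.
  return {
    userId: session?.userId ?? null,
    legacySync: session?.legacySync ?? false,
  };
};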

Wrapping up

We’ve covered a broad range of topics in this post. We’ve seen how to set up Auth.js in a SvelteKit project using the @auth/core library. We saw how to set up providers, adapters, and then took a look at various callbacks that allow us to customize our authentication flows.

Best of all, the tools we saw will work with SvelteKit or Next, so if you’re already an experienced Next user, a lot of this was probably familiar. If not, much of what you saw will be portable to Next if you ever find yourself using that.

]]>
https://frontendmasters.com/blog/using-nextauth-now-auth-js-with-sveltekit/feed/ 0 1764