We don’t have to sacrifice the awesomeness of GROQ for the safety of TypeScript. With Zod, we can have both.
Updated 2022-10-03 | Published 2022-09-24
Content edited in the Sanity Studio is defined by schema files. However, the Sanity Content Lake is schemaless. This lends impressive flexibility to authoring and querying data, especially when using GROQ, which can arbitrarily reshape data at query time, meaning your output is not always the same shape as your input.
All this flexibility comes at the cost of type safety, and in an increasingly All-TypeScript-Everything world that's becoming less palatable. Luckily, it's solvable!
In this blog we'll explore Zod, which can generate Types for our Sanity data during development as well as enforce validation at run time.
This blog assumes you’re getting started with TypeScript (like me!) but have experience working with Sanity Schema and GROQ.
You could.
A popular method of Type Safety pursued by many is to generate Types from Sanity Schema files. There are a few community packages that do this.
Even using these, you may still need to manually create or extend Types, because your GROQ query may create new shapes of data not represented in your schema.
For example:
```ts
// In your Sanity Schema
{
  name: 'person',
  type: 'document',
  fields: [
    {name: 'firstName', type: 'string'},
    {name: 'lastName', type: 'string'},
  ],
}
```

```ts
// Example generated types
type Person = {
  _id: string
  _type: 'person'
  firstName?: string | null
  lastName?: string | null
}

// But GROQ is much more flexible than just the schema!
const query = `*[_type == "person"][0]{
  "fullName": array::join([firstName, lastName], ' ')
}`

// So if we get that data
const person: Person = await client.fetch(query)

// It returns what we asked for, not what the Studio Schema says
// {fullName: "Terrence Howard"}

// Error: `fullName` does not exist on type `Person`
console.log(person.fullName)
```
If you primarily want type safety bound to your schema, you might prefer Sanity's GraphQL API: deploying it requires your schema, and from that deployed API you can code-generate Types.
And what happens when you don't have access to the schema files? On the home page of this blog I query the Community Sanity project to list Guides I've written – ideally we can generate Types for data even when it was authored with a Schema we don't have.
Without automated Type generation, you’re likely writing your own Types.
When you do this, your code might look something like the below. We have a Type and a Query – but nothing to ensure the two are actually in sync either in development or at run time.
```ts
// This is a hand-tooled, artisanal Type
type Article = {
  title: string | null,
  slug?: {
    current: string | null
  }
}

// And it matches my query, today
const query = `*[_type == "article"]{
  title,
  slug
}`

// So this type is correct!
const articles: Article[] = await client.fetch(query)

// ...for now
```
This runs the risk of getting out of sync very quickly.
You’re telling your project what shape the type of `articles` probably is. But you have no idea whether it has more or fewer keys, or whether any of the values are actually the correct type.
If we add another field to our GROQ query, our Type is no longer up to date, and we'd only get warned during development. We could modify the Type, but that won't modify the Query.
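To make that drift concrete, here's a hedged sketch using a hypothetical `publishedAt` field: the query gains a field, the hand-written Type does not, and nothing complains until we try to use the new data.

```ts
// `publishedAt` is a hypothetical field, purely for illustration
const widerQuery = `*[_type == "article"]{
  title,
  slug,
  publishedAt
}`

// TypeScript still believes this is Article[] and can't check the query string
const articles: Article[] = await client.fetch(widerQuery)

// ...so this is a compile-time error, even though the data is actually there
// articles[0].publishedAt
```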
We don’t have to sacrifice the awesomeness of GROQ for the safety of TypeScript.
whynotboth.gif
With Zod we’ll replace the work of writing a Type with writing a validator for the returned Data, and get a Type for free.
New to Zod? Stop reading and spend 30 minutes completing these exercises to get acquainted:
Here’s the same query as above, validated with Zod.
```ts
import {z} from 'zod'

export const Slug = z
  .object({
    current: z.string().nullable(),
  })
  .nullable()

export const articleZ = z.object({
  title: z.string().nullable(),
  slug: Slug,
})

export const Articles = z.array(articleZ)

const query = `*[_type == "article"]{
  title,
  slug
}`

const articles = await client.fetch(query).then((result) => Articles.parse(result))

// automatically created type for `articles`
// type Articles = {
//   title: string | null,
//   slug: {
//     current: string | null
//   } | null
// }[]
```
Our `Articles` Type is now generated automatically by the `.parse()` function, which is great all on its own.
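If we want to name these Types explicitly, say to annotate props elsewhere, Zod can infer them from the validators. A small sketch, where the `Article` and `ArticleList` names are my own choice:

```ts
// Infer named Types straight from the validators above
// (the names here are illustrative, not part of the original example)
export type Article = z.infer<typeof articleZ>
export type ArticleList = z.infer<typeof Articles>
```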
What’s even better is the validation taking place. If additional fields are added to our query, they are stripped from the data until we also add them to the validator. This feedback loop of modifying both the query and the validator ensures we always have the most accurate Type, even at runtime.
Also, if a field is removed from the query, the validator will error because it receives `undefined` instead of `null`.
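Here's a hedged sketch of both cases, again using a hypothetical `publishedAt` field and the validators defined above:

```ts
// 1. A field added to the query but not the validator is stripped
//    (`publishedAt` is hypothetical, for illustration)
const widerQuery = `*[_type == "article"]{
  title,
  slug,
  publishedAt
}`
const stripped = await client.fetch(widerQuery).then((result) => Articles.parse(result))
// stripped[0].publishedAt // compile-time error until articleZ knows about it

// 2. A field removed from the query but still in the validator errors at runtime
const narrowerQuery = `*[_type == "article"]{
  title
}`
await client.fetch(narrowerQuery).then((result) => Articles.parse(result))
// throws a ZodError: `slug` came back undefined, not an object or null
```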
In the above examples we are specifically naming fields, not using the `...` spread operator to return every field. While you could add `.passthrough()` to avoid this, that’s the sort of fast-and-loose thinking that got us into this mess in the first place.
Also, you’re better off querying for specific fields to keep the data returned as small (and therefore fast) as possible.
```ts
// Query for every field
const query = `*[_type == "article"]{
  ...
}`

// Allow every field to pass through validation
const articles = await client
  .fetch(query)
  .then((result) => z.array(articleZ.passthrough()).parse(result))

// Now we're back where we started, unsure what data we have!
// Our GROQ is "simpler", but at what cost?
```
I'm using @portabletext/react on this website ... and every other React project that uses Portable Text content.
While we can parse our content from Sanity to be a specific Type, any components we use from libraries will expect Types of their own to be compatible.
It appears that, right now, this is the best solution I could find for creating a parser that satisfies an existing Type.
The `value` prop of `<PortableText />` must be an array of `TypedObject`s.
```ts
import type {TypedObject} from '@portabletext/types'

// This function takes in a Type, and returns a Zod schema constrained to that Type
const schemaForType =
  <T>() =>
  <S extends z.ZodType<T, any, any>>(arg: S) => {
    return arg
  }

// This is the shape of the "TypedObject" Type from Portable Text
const baseTypedObjectZ = z
  .object({
    _type: z.string(),
    _key: z.string(),
  })
  .passthrough()

// Here we use the helper function to wrap our Zod object
export const typedObjectZ = schemaForType<TypedObject>()(baseTypedObjectZ)
```
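To show what the helper buys us, here's a small sketch with a deliberately broken parser of my own: if the Zod object ever stops matching `TypedObject`, the wrapping call becomes a compile-time error instead of a silent mismatch.

```ts
// An illustrative parser that forgets the required `_key` field
const missingKeyZ = z.object({_type: z.string()}).passthrough()

// Type error: this schema's output is not assignable to TypedObject
// const brokenZ = schemaForType<TypedObject>()(missingKeyZ)
```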
Notice the `.passthrough()` method. These block objects could contain any extra data, and without it, that extra data would be stripped at the point we first parse the queried document.
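For example, here's a hedged sketch with hypothetical block data: both a standard text block and a custom `code` block satisfy `typedObjectZ`, and their extra fields survive parsing.

```ts
// Hypothetical block data, purely for illustration
const blocks = [
  {_type: 'block', _key: 'a1', children: [{_type: 'span', _key: 'a2', text: 'Hello'}]},
  {_type: 'code', _key: 'b1', language: 'typescript', code: `console.log('hi')`},
]

// Thanks to .passthrough(), `children`, `language` and `code` all survive
const parsed = z.array(typedObjectZ).parse(blocks)
```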
So now, when we query for a Portable Text field named `content`, our parser looks like this:
```tsx
export const articleZ = z.object({
  _id: z.string(),
  title: z.string().nullable(),
  content: z.array(typedObjectZ).nullable(),
})

// Now content will satisfy this type when using the component
<PortableText value={content} />
```
However, all we know from `TypedObject` is that it is an object with `_type` and `_key`. This doesn't give us safety about what data is within each unique block type, especially for custom objects.
Now we'll make it the responsibility of each individual component to strictly parse the block value that it has been passed.
Let's use the Code Input block as an example:
```ts
// This uses the .extend() method to:
// 1. add extra keys to typedObject
// 2. override the _type key to a literal
// 3. parse the whole object in its component
export const typedObjectCodeZ = baseTypedObjectZ.extend({
  _type: z.literal('code'),
  code: z.string().optional(),
  language: z.string().optional(),
})

export type TypedObjectCode = z.infer<typeof typedObjectCodeZ>
```
```tsx
// Now we have our parsers, here's the component to render it
import type {PortableTextTypeComponentProps} from '@portabletext/react'

export default function TypeCode(props: PortableTextTypeComponentProps<TypedObjectCode>) {
  const value = React.useMemo(() => typedObjectCodeZ.parse(props.value), [props.value])

  return <Prism code={value.code} language={value.language} />
}

// Finally put it all together!
// Here's the components being passed into <PortableText />
export const components = {
  ..., // other blocks
  types: {
    code: TypeCode,
  },
}
```
With the above, our Portable Text blocks have been:

- Loosely validated at query time with `.passthrough()`, so no block data is lost
- Passed into the `<PortableText />` component `value` with the correct `TypedObject` Type
- Strictly parsed inside each individual component, based on their `_type`
When images are uploaded to Sanity they are given a unique `_id` which contains information about the size and format of the image. This also means that if we know the `projectId` and `dataset` of where the image is stored, we can dynamically generate a URL to the image.
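For illustration, here's a hedged sketch with a hypothetical `_id`: the dimensions and format sit right there in the string.

```ts
// A hypothetical asset _id, following the pattern image-<hash>-<width>x<height>-<format>
const assetId = 'image-Tb9Ew8CXIwaY6R1kjMvI0uRR-2000x3000-jpg'

const [, , dimensions, format] = assetId.split('-')
// dimensions === '2000x3000', format === 'jpg'
```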
To generate these URLs, we use the @sanity/image-url library to create a helper function `urlFor`:
```ts
import imageUrlBuilder from '@sanity/image-url'
import type {SanityImageSource} from '@sanity/image-url/lib/types/types'

export const urlFor = (source: SanityImageSource) =>
  imageUrlBuilder(projectDetails()).image(source)
```
(The `projectDetails` helper function here returns the `projectId` and `dataset`.)
The `SanityImageSource` Type will accept a string – the `_id` of the image – but I want to also use the crop and hotspot details of the image, so we need to query for those.
For each image I'm querying with GROQ like this:
```groq
image {
  crop,
  hotspot,
  asset->{
    _id,
    _type,
    altText,
    description,
    metadata {
      blurHash
    },
  }
}
```
`altText` and `description` are fields from the excellent Media Browser plugin.
This is all the data I need, but there's a disconnect between what gets returned and what the `urlFor` helper function accepts. If `crop` and/or `hotspot` don't exist, they'll return `null` – where `SanityImageSource` needs them to be `undefined`.
Fortunately, Zod can transform values during parsing!
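As a minimal sketch of the technique before the full parser: `.transform()` runs after validation and can map `null` to `undefined`.

```ts
// An illustrative standalone parser: null in, undefined out
// (the output type becomes `string | undefined`)
const nullToUndefined = z.string().nullable().transform((value) => value ?? undefined)

nullToUndefined.parse('abc') // 'abc'
nullToUndefined.parse(null) // undefined
```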
Like working with Portable Text above, to satisfy this `SanityImageSource` type we'll create a Zod parser that is run through the `schemaForType` helper function.
The code for this is a bit long to show here, take a look at it in the repo for this project.
You'll see I have individual parsers for crop and hotspot, as they have their own types as well. The solution to our `null`/`undefined` error is contained in this parser:
```ts
// These partials are all in the same file
// https://github.com/SimeonGriggs/simeonGriggs/blob/main/app/types/image.ts
export const sanityImageObjectExtendedZ = z.object({
  asset: sanityImageZ,
  // GROQ may return null for these
  // But our type requires them to be undefined if they don't exist
  crop: sanityImageCropZ.nullable().transform((v) => v ?? undefined),
  hotspot: sanityImageHotspotZ.nullable().transform((v) => v ?? undefined),
})
```
If the crop or hotspot exist they are used, otherwise `null` is converted to `undefined`. And so now any image data we query in GROQ can be run through this parser and then used by the `urlFor` helper function.
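As a hedged usage sketch, assuming `data.image` was queried with the GROQ projection above, the parsed image can go straight into the image URL builder chain:

```ts
// `data.image` is a hypothetical query result shaped like the GROQ projection above
const image = sanityImageObjectExtendedZ.parse(data.image)

// Build a URL with crop and hotspot applied
const src = urlFor(image).width(1200).auto('format').url()
```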
At the time of writing this blog post I'm relatively new to TypeScript, so maybe I'll look back eventually at this post and cringe. Maybe you're cringing already?
While I understood in broad terms the point of TypeScript types, it didn't really make sense to me to "pretend" to know what shape data is.
Especially when the Sanity Content Lake is schema-less, and when we talk about "Schema" with Sanity we're only declaring what's editable within the Studio app. It's not intended to be a promise of the shape of the data that could be returned.
Zod allows us to parse the shape of the data in the same place it is queried. When we use GROQ, the query itself determines the shape of the data, so to me this is the most logical place to both validate and create Types when working with Sanity data.