diff --git a/content/2026/04/Building-a-High-Performance-Blog-from-Scratch-with-Bun.md b/content/2026/04/Building-a-High-Performance-Blog-from-Scratch-with-Bun.md
new file mode 100644
index 0000000..fcafd92
--- /dev/null
+++ b/content/2026/04/Building-a-High-Performance-Blog-from-Scratch-with-Bun.md
@@ -0,0 +1,250 @@
---
title: "Building a High-Performance Blog from Scratch with Bun"
date: 2026-4-28
tags: [Bun, Progressive Enhancement, Performance, React, Typescript, Web App Architecture, AppShell]
excerpt: "A little about building my blog with Bun and achieving less than 40KB page loads by implementing an App Shell architecture that progressively enhances from an MPA into an SPA."
draft: false
---

## Made From Scratch

While it was tempting to deploy something like [Ghost](https://ghost.org/) for my blogging, it didn't feel right. As a software engineer, shouldn't my personal site, where I share my own thoughts and experiences, be something that I built myself? Shouldn't I know what every line of code does and be able to speak to it?

So I built my blog from scratch with no frameworks or CMS (ish), but that doesn't mean modern technologies were off the table. As a big fan of [Bun](https://bun.com/), it only made sense that I would use it here. It's performant, has native TypeScript support, and takes an all-in-one toolbox approach. Initially I was going to use [Elysia](https://elysiajs.com/) as well, but while I still love it and will probably give it its own blog post, it wasn't as good a fit for my blog as [Bun's built-in Fullstack Server](https://bun.com/docs/bundler/fullstack) was.

With Bun being an all-in-one toolkit, I only have three dependencies in my package.json:
- **React 19** for server-side rendering
- **gray-matter** for parsing frontmatter from blog posts
- **marked** for rendering markdown into HTML

This combination gives me modern developer tooling that achieves an incredibly performant blog.
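To make that division of labor concrete, here is a dependency-free sketch of the parse-then-render pipeline that gray-matter and marked handle in the real code. `parsePost` and its toy markdown handling are illustrative stand-ins written for this post, not the blog's actual implementation.

```typescript
// Rough, hand-rolled stand-in for the gray-matter + marked pipeline:
// gray-matter splits "---"-delimited frontmatter from the body,
// and marked turns the markdown body into HTML.

type Post = { data: Record<string, string>; bodyHtml: string };

function parsePost(raw: string): Post {
  const data: Record<string, string> = {};
  let body = raw;

  // Frontmatter: "---\nkey: value\n---\nbody"
  const match = raw.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
  if (match) {
    for (const line of match[1].split("\n")) {
      const idx = line.indexOf(":");
      if (idx > -1) data[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
    }
    body = match[2];
  }

  // Toy renderer: handles only "# headings" and plain paragraphs
  const bodyHtml = body
    .trim()
    .split(/\n{2,}/)
    .map(block =>
      block.startsWith("# ") ? `<h1>${block.slice(2)}</h1>` : `<p>${block}</p>`
    )
    .join("\n");

  return { data, bodyHtml };
}
```

In the real code, the same `{ data, bodyHtml }` shape comes out of `matter(raw)` plus `marked.parse(body)` and is handed to the React components.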
At its core, my blog is a static-site generator following the [App Shell Model](https://developer.chrome.com/blog/app-shell) (essentially the precursor to modern island architectures) that progressively enhances into a single-page app if JavaScript is allowed to execute. This achieves an initial page load of *less than 40KB over the wire* using gzip compression, and even smaller subsequent navigations. The other benefit of a progressive enhancement approach is that functionality like filtering by tags still works in JS-restricted environments through multi-page navigation, and then elevates to client-side handling, avoiding the cost of reloading the App Shell.

## The App Shell

If you're not familiar with the concept of an App Shell, the short of it is this: a website or application has many components that are static and persist across pages, collectively called the 'shell', and once the shell is loaded, we only need to fetch the content that fills it in. Things like navigation bars and footers are good candidates for being part of the shell. In a traditional multi-page app (MPA), we refetch this often-static content every time we navigate, because each click loads everything again. To get around this cost, developers started creating 'Single Page Applications', more commonly known as SPAs. These web apps load all the resources upfront and then fetch just the content they need. On the first load we're sending a bunch of things that aren't needed, such as the layouts of pages that may never be clicked on, but subsequent pages feel instant because they only need to fetch a (relatively) small amount of data.

Modern web apps usually leverage some kind of server-side rendering (SSR) to improve the initial load performance.
My blog was built with the idea that I wanted my entire site to act like an MPA and, if JavaScript is available, enhance the user experience by upgrading it to an SPA that loads just the content. That's one of the reasons the frontend is so simple; it is just a blog, after all.

To achieve this, there are two pieces: the server-side implementation and the client-side enrichment.

The server is responsible for serving content in two formats, either with or without the AppShell. By default, the server takes the requested content and wraps it in the AppShell before returning it. This is a 'full page load'.

```jsx
return compressResponse(
  renderToString(
    <AppShell>
      {/* <Content /> stands in for the requested page component */}
      <Content />
    </AppShell>,
  )
);
```

`renderToString` is a React function that takes my JSX components and translates them into browser-understandable HTML. The `compressResponse` function is just a simple utility to gzip a response so less data is sent over the network. The key thing to notice is that the AppShell simply wraps the content. This means it's trivial to send content without the AppShell.

```jsx
// AppShell is already loaded, just send the <main> content
if (req.headers.get("shell-loaded") === "true") {
  return compressResponse(
    renderToString(
      <Content />
    )
  );
}
```

By following semantic HTML, the `<main>` element is a natural separation point to replace on the client as needed. Included in the AppShell is the onLoad.ts file, which gets transpiled into JS and inserted into the AppShell markup.

This script adds special handling for every link that takes us somewhere else in the blog: an onclick handler stops the default browser behavior of fetching the page and instead calls a custom loadContent function.

```ts
links.forEach(link => {
  // Internal link, use the History API
  if (link.host === window.location.host) {
    link.onclick = async (e) => {
      console.log('clicked', link.href);
      e.preventDefault();
      e.stopPropagation();
      e.stopImmediatePropagation();
      window.history.pushState({}, '', link.href);
      await loadContent(link.href, true);
    }
  }
});
```

`loadContent` fetches the content using JS with a *shell-loaded* header included. This header tells the server that the AppShell is already loaded on the client, instructing it to send just the content, not the full page with the AppShell included. We then take the response and replace the `<main>` element with the new content from the server.

```ts
const response = await fetch(url, {
  headers: {
    'shell-loaded': 'true'
  }
});
const html = await response.text();
const mainElement = document.querySelector('main');
if (mainElement) {
  mainElement.outerHTML = html;
  attachLinkHandlers();
}
```

We also run attachLinkHandlers once again so that any links included in the new page get the enriched behavior instead of falling back to a traditional page load.

## Build-Time Magic: Bun Plugins

While there's no web editor for editing blog posts, they have to live somewhere. In my case, they live in the [repo itself](https://git.cbraaten.dev/Caleb/Blog) as markdown files in a *content directory*. This gives me substantial freedom to move to a different solution in the future or to write my posts in any text editor.
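Since the posts live as files, the directory layout can double as the URL structure. The post doesn't spell out the exact route scheme, so treat this as a hypothetical sketch of deriving a route from a content-directory path:

```typescript
// Hypothetical route derivation: content/<year>/<month>/<slug>.md -> /<year>/<month>/<slug>.
// The real blog's scheme may differ; this only illustrates the files-as-routes idea.
function routeForFile(relativePath: string): string {
  return (
    "/" +
    relativePath
      .replace(/\\/g, "/")        // normalize Windows path separators
      .replace(/^content\//, "")  // drop the content-directory prefix if present
      .replace(/\.md$/, "")       // strip the markdown extension
  );
}
```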
Today, most of the time when we reach for a database, it's so that we can persist data separately from the lifecycle of our application code. In my case, I don't actually care about the data in the database itself, because my content is saved in the markdown files. Using Bun's built-in [sqlite driver](https://bun.com/docs/runtime/sqlite), I'm able to easily embed all the published posts into my deployment artifact, along with the metadata for each post.

Not only does this allow me to deliver on the AppShell model and the progressive enhancement of my blog from an MPA into an SPA, it also unlocks the ability to easily implement features like pagination for blog posts, an archive to see all of them, next and previous buttons to navigate between posts chronologically, and finally (and most importantly) filters based on tags.

The beauty of this approach is that the database gets regenerated during the build process from the source markdown files. This means my deployment artifact is completely self-contained: I don't need to configure external database connections, manage database migrations at runtime, or worry about data consistency during deploys. The content is the authoritative source of truth, and the database is simply a convenient in-memory representation optimized for querying.

To load these markdown files into my blog, I leverage Bun's plugin system. My blog, at the time of this writing, has three plugins. Plugins are implemented by writing some handler code and including the plugin in a `bunfig.toml` file.

```toml
preload = [
  "./bun_plugins/onStartup-post-importer.ts"
]

[serve.static]
plugins = [
  "./bun_plugins/onImport-markdown-loader.tsx",
  "./bun_plugins/onRuntime-external-urls.ts"
]
```

The *preload* key executes those plugins before any user-space code runs.
This allows me to construct the environment that my user-space code takes as a given, and to separate the build process, where my markdown files are transformed into HTML, from the runtime code that serves the final product. In my implementation there are three different *contexts* that my `onStartup-post-importer` can run within.

First up, if it's production, we assume that the database has already been constructed during the build process; we therefore don't have the content directory, but do have a SQLite database with the pre-rendered posts saved inside. This is a simple guard clause that returns early if the environment is production.

If it's not production, we know that the *content directory* exists. In both the **build** and **development** contexts we want to parse the files, because we are constructing the database that the application serves. In the **build** context, though, we skip adding draft files to the database because they aren't yet ready to publish.

```ts
if (process.env.NODE_ENV === 'production') return;

const glob = new Bun.Glob("**/*.md");
for await (const file of glob.scan("./content")) {
  // (frontmatter parsing into `route`, `data`, and `bodyHtml` elided; see full file)

  // Omit draft blog posts from database initialization during builds
  if (process.env.NODE_ENV === 'build' && data.draft) {
    continue;
  }

  // Add parsed markdown to database
  dbConnection.addPost(route, data, bodyHtml);
}
```
View [full file](https://git.cbraaten.dev/Caleb/Blog/src/branch/main/bun_plugins/onStartup-post-importer.ts)

## The Challenges of Navigating HMR

Initially I wanted to use [Bun's built-in hot reloading](https://bun.com/docs/bundler/hot-reloading), but I ran into a problem. The first plugin I wrote was the `onImport-markdown-loader`. This is perhaps *THE COOLEST PART OF BUN'S PLUGIN SYSTEM*.
I am able to import markdown files in my TypeScript code, and Bun will intercept the import and run my own code, allowing me to transform what is actually returned. This plugin removes the metadata, transforms the markdown into HTML, wraps the blog post in the App Shell, and returns the result as an HTML module.

```tsx
const markdownLoader: BunPlugin = {
  name: 'markdown-loader',
  setup(build) {
    build.onLoad({filter: /\.md$/}, async args => {
      // Pseudo-code: read the file and parse it into metadata and body HTML
      const file = Bun.file(args.path);
      const { meta, bodyHtml } = parse(file);

      // Wrap content with App Shell (component names are illustrative; see full file)
      const renderedHtml = renderToString(
        <AppShell>
          <BlogPost meta={meta} bodyHtml={bodyHtml} />
        </AppShell>
      );

      // Return to importer
      return {
        contents: renderedHtml,
        loader: 'html',
      };
    });
  },
};
```
View [full file](https://git.cbraaten.dev/Caleb/Blog/src/branch/main/bun_plugins/onImport-markdown-loader.tsx)


Inside `index.ts` I have a dynamic import that crawls the *content directory* the same way the `onStartup-post-importer` does and attempts to import each markdown file. Once all the files have been 'imported', they are collected into an object and spread into the Bun server, registering each endpoint.

```tsx
Bun.serve({
  development: {
    hmr: true,
    console: true,
  },

  routes: {
    // Spread the blogPosts object into the routes
    ...(await blogPosts()),
  },
});
```

This is great because every imported file is automatically recorded in Bun's file watcher and will be reloaded on any change. This means I can write my blog posts and see them re-rendered in the browser as they will be presented, just like developing with React.

For as cool as this was, it unfortunately introduced two problems. The first was that it prevented me from implementing my App Shell approach: the Bun server registered the full page as a route, and I wasn't able to hook in and serve a version with just the content using a header.
It was all or nothing, so I duplicated the mounting of blog posts: one standard mount method used in production, and one with HMR support and file watching when running in development.

```tsx
Bun.serve({
  development: {
    hmr: true,
    console: true,
  },

  routes: {
    // standard mounting of blog posts
    ...(await blogPosts(false)),

    // hot module replacement in development mode
    ...(process.env.NODE_ENV === "development" ? (await blogPosts(true)) : {}),
  },
});
```

The other issue was that Bun would try to bundle every asset it saw referenced in an import. This is a great feature, but when the `onImport-markdown-loader` plugin imported the markdown, it wrapped it in the AppShell. This AppShell included a `profile-badge` component that used a static image for my profile picture. Bun would then try to resolve this image and bundle it into a dynamically named resource. That same component in production would still reference the original file name, resulting in a broken image.

This put me in a catch-22: either I use Bun's HMR and forgo the App Shell model because the profile picture can't be resolved, or I give up HMR and keep the AppShell approach. Instead, I went with hacking the bundler through another plugin, `onRuntime-external-urls`. This plugin intercepts asset resolution, and if the resource is included in the ignore list, tells the bundler to move on. This allows me to keep using Bun's HMR system for viewing blog posts as I edit them, just with a broken image in the `profile-badge`, which, for development, isn't a big deal.

```tsx
const runtimeExternalUrlsPlugin: BunPlugin = {
  name: 'runtime-external-urls',
  setup(build) {
    // Intercept resolution of paths that start with /
    build.onResolve({ filter: /^\// }, args => {
      // Mark specific runtime URLs as external to prevent bundler analysis
      const externalPatterns = [
        '/profile-picture.webp',
      ];

      if (externalPatterns.some(pattern => args.path.includes(pattern))) {
        return {
          path: args.path,
          external: true, // This tells the bundler NOT to bundle this
        };
      }

      return undefined; // Let normal resolution continue
    });
  },
};
```

## Looking Forward To Sharing More

While it certainly would have been a lot easier and substantially faster to deploy an already-built solution, it didn't feel right. And while there was a desire to use an existing framework like Astro, I wanted a blog that was performant because I personally made it that way. One core requirement I had was that my site would be less than 128KB whenever you loaded any page. Not only did I meet that requirement, I surpassed it by coming in under 40KB, including my profile picture!

It's worth calling out that this post highlights the success I had building my blog, but in reality there were a lot of challenges. I went down several dead ends, went back and forth on which web server I was going to use, and ran into countless bugs. I wanted to give up on it dozens of times, and even today I can still see all kinds of issues and architectural problems that I don't like. But I'm learning to let go and ship. Not only does it work, it works really well, and there is no need to let perfection get in the way of starting to share my thoughts.

And while yes, this site may appear to be a simple blog, there's actually a lot that went into achieving an AppShell model that starts the application as an MPA and then progressively enhances into an SPA.
Now, every time I visit my blog, I can have the satisfaction of watching how fast it loads and knowing that I built it from scratch.

> If you're interested, you can view the [full source code here](https://git.cbraaten.dev/Caleb/Blog)

diff --git a/content/2026/04/hello-world.md b/content/2026/04/hello-world.md
deleted file mode 100644
index 337af98..0000000
--- a/content/2026/04/hello-world.md
+++ /dev/null
@@ -1,9 +0,0 @@
---
title: Hello World
date: 2026-4-16
tags: [Web Development, TypeScript, Bun]
excerpt: It's finally here. Eventually this will be a real blog post.
draft: true
---

This is just some boilerplate for a first post. I just got done testing my CI/CD for this and will probably update this to talk about my blog's architecture.