For my notes site, https://notes.kaideru.net, I decided to use GitHub as a CMS. This was mostly because my notes site is just my Obsidian vault, which was already backed up to GitHub. I’m using the GitHub REST API to fetch content, SvelteKit for the frontend, and Obsidian or Netlify CMS to edit files. The site and Netlify CMS are hosted on Netlify.
## Netlify CMS
As I mentioned above, I use Netlify CMS so I can edit notes online. In this case, Netlify CMS is hosted alongside the contents of my Obsidian vault, separate from the front-end website files. You can read more about my configuration and customizations for Netlify CMS here.
## Setup
Basically, I fetch the raw markdown from GitHub and process it on the server before sending the server-side-rendered page to the browser. I use unified (remark/rehype) for this, with quite a few plugins:
```js
// `metadata` is filled in by the small inline plugin below;
// `frontmatter` is a YAML frontmatter parser.
let metadata = {};

const html = await unified()
  .use(remarkParse)
  .use(remarkGfm)
  .use(remarkFrontmatter)
  // Inline plugin: pull the YAML frontmatter node out of the tree
  // and parse it into `metadata`.
  .use(() => (tree) => {
    const yaml = tree.children.find((child) => child.type === 'yaml')?.value || '';
    metadata = frontmatter(`---\n${yaml}\n---`).data;
  })
  .use(remarkBreaks)
  .use(remarkSveltePrism)
  .use(remarkPrism)
  .use(remarkRehype)
  .use(rehypeLinkTags)
  .use(rehypeObsidianLinks)
  .use(rehypeImageLinks)
  .use(rehypeShowLanguageName)
  .use(rehypeHighlightSvelte)
  .use(rehypeSlug)
  .use(rehypeToc)
  .use(rehypeAutolinkHeadings, linkOptions)
  .use(rehypeStringify)
  .process(markdown);
```
I wrote about half of those plugins myself, to do various custom things like wikilinks, tag links, syntax highlighting for Svelte, proper image links, et cetera.
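As an illustration, here is a much-simplified take on the wikilink idea as a standalone remark plugin. This is a sketch, not my actual plugin: the slug scheme is a placeholder, and edge cases (aliases, embeds, empty text nodes) are ignored.

```js
// A minimal wikilink transform: turns [[Note Name]] into a link node.
import { visit } from 'unist-util-visit';

export function remarkWikilinks() {
  return (tree) => {
    visit(tree, 'text', (node, index, parent) => {
      const match = /\[\[([^\]]+)\]\]/.exec(node.value);
      if (!match || !parent || index === undefined) return;
      const [whole, target] = match;
      const link = {
        type: 'link',
        // Placeholder slug scheme, not my real title-to-slug mapping.
        url: '/' + target.toLowerCase().replace(/\s+/g, '-'),
        children: [{ type: 'text', value: target }],
      };
      // Split the text node around the match and splice the link in.
      parent.children.splice(
        index,
        1,
        { type: 'text', value: node.value.slice(0, match.index) },
        link,
        { type: 'text', value: node.value.slice(match.index + whole.length) }
      );
      // Continue at the trailing text node in case it has more wikilinks.
      return index + 2;
    });
  };
}
```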
## Sticking points
### Images
Because my Obsidian vault is backed up to a private repository, getting images is a bit tricky. As far as I can tell, it’s not possible to simply request an image by setting it as the `src` of an `<img>`, even if you include the user token. So what I ended up doing is making an image endpoint that requests the raw image data from GitHub, turns it into a blob, and returns the blob with the correct MIME type.
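A minimal sketch of what that endpoint can look like, assuming a catch-all `image` route; `OWNER`, `REPO`, and the `GITHUB_TOKEN` environment variable are placeholders rather than my actual setup:

```js
// src/routes/image/[...path]/+server.js — a rough sketch, not my exact code.
import { GITHUB_TOKEN } from '$env/static/private';

const MIME = {
  png: 'image/png',
  jpg: 'image/jpeg',
  jpeg: 'image/jpeg',
  gif: 'image/gif',
  svg: 'image/svg+xml',
  webp: 'image/webp',
};

/** @type {import('./$types').RequestHandler} */
export async function GET({ params }) {
  // Ask the GitHub contents API for the raw file bytes.
  const res = await fetch(
    `https://api.github.com/repos/OWNER/REPO/contents/${params.path}`,
    {
      headers: {
        Accept: 'application/vnd.github.raw+json',
        Authorization: `Bearer ${GITHUB_TOKEN}`,
      },
    }
  );
  if (!res.ok) return new Response('404: Not found', { status: 404 });
  // Re-serve the blob with a MIME type derived from the file extension.
  const blob = await res.blob();
  const ext = params.path.split('.').pop().toLowerCase();
  return new Response(blob, {
    headers: { 'Content-Type': MIME[ext] ?? 'application/octet-stream' },
  });
}
```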
### Metadata
Another problem I had was working with frontmatter. Git doesn't store most metadata, and as far as I know, GitHub doesn't either. So I made my own metadata storage in the form of JSON files, generated with Python in a post-commit hook. (Tags post-commit hook). This allows me to store things like tags, a title-to-slug mapping, which notes are private, and backlinks. This might not be the best way to do it, but it works.
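For illustration, a metadata file might look something like this; the shape here is hypothetical, not my exact schema:

```json
{
  "tags": { "svelte": ["using-github-as-a-cms", "svelte-tips"] },
  "slugs": { "Using GitHub as a CMS": "using-github-as-a-cms" },
  "private": ["daily/2023-01-01"],
  "backlinks": { "using-github-as-a-cms": ["svelte-tips"] }
}
```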
### Non-markdown files
I wanted some way to expose certain non-markdown files, like the CSS file I use to customize Netlify CMS. However, I didn't want my metadata files to be public.
Since I am hosting my files separately from the site, SvelteKit makes this fairly easy. I created a `[slug].[type]` route with a `+server.js` file and added a GET handler (shown below) that only responds if the requested file is on an allow-list.
```js
// src/routes/[slug].[type]/+server.js
import { fetchFile } from '$lib/github';

const ALLOWLIST = ['mobile-overrides.css'];

/** @type {import('./$types').RequestHandler} */
export async function GET({ params }) {
  const filename = `${params.slug}.${params.type}`;
  if (ALLOWLIST.includes(filename)) {
    const result = await fetchFile(filename);
    return new Response(await result.text());
  }
  return new Response('404: Not found', { status: 404 });
}
```
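The `fetchFile` helper in `$lib/github` isn't shown above; a minimal version might look something like the sketch below. It's the same pattern as the image endpoint sketch earlier, and `OWNER`, `REPO`, and the token handling are again placeholders.

```js
// src/lib/github.js — a hypothetical sketch of the helper, not my exact code.
import { GITHUB_TOKEN } from '$env/static/private';

// Fetch a file's raw contents from the private vault repository.
export async function fetchFile(path) {
  return fetch(`https://api.github.com/repos/OWNER/REPO/contents/${path}`, {
    headers: {
      Accept: 'application/vnd.github.raw+json',
      Authorization: `Bearer ${GITHUB_TOKEN}`,
    },
  });
}
```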
## Update
I found the above method to be quite slow, especially for my blog homepage, where I load all the blog posts so I can show previews. Therefore, I have moved the markdown-to-HTML processing step from the server to my computer. Basically, I have a post-commit hook set up so that every time I edit a blog post and commit it, the hook generates a JSON file for that post with the processed HTML and its metadata.
For the post-commit hook, I took the same logic and plugins as above and used Node.js to run them on my computer:
```js
// scripts/build-blog.js
import fs from 'fs';
import path from 'path';
import { fileURLToPath } from 'node:url';
import { parseMarkdown } from './markdown-utils.js';

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

// blog.json lists which notes in the vault are blog posts.
const blogList = JSON.parse(fs.readFileSync(path.join(__dirname, '../blog.json'), 'utf8')).posts;

fs.readdir(path.join(__dirname, '../'), (err, filenames) => {
  if (err) throw err;
  filenames.forEach(async (filename) => {
    // Skip anything that isn't a markdown file on the blog list.
    if (!filename.endsWith('.md') || !blogList.includes(filename.replace('.md', ''))) {
      return;
    }
    const html = await parseMarkdown(fs.readFileSync(path.join(__dirname, '../', filename), 'utf8'));
    const htmlFilename = path.join(__dirname, '../blog', filename.replace('.md', '.json'));
    fs.writeFile(htmlFilename, JSON.stringify(html), (err) => {
      if (err) console.error(err);
    });
  });
});
```
The `parseMarkdown` function runs the markdown through all the plugins shown above. Then my post-commit hook just has to run:
```sh
node scripts/build-blog.js
git stage ./blog
git commit -m "Update blog posts"
```
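With the processed JSON in the repo, the page's load function has almost nothing left to do. A rough sketch of the idea; the route and JSON shape here are assumptions, not my exact code:

```js
// src/routes/blog/[slug]/+page.server.js — a hypothetical sketch.
import { fetchFile } from '$lib/github';

/** @type {import('./$types').PageServerLoad} */
export async function load({ params }) {
  // The post-commit hook already produced the HTML, so this is just
  // fetching and parsing pregenerated JSON.
  const result = await fetchFile(`blog/${params.slug}.json`);
  return JSON.parse(await result.text());
}
```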
Furthermore, using this technique I was finally able to hit a perfect Lighthouse score for my blog, without any caching layer. Not to say that caching is bad, but I didn't want to deal with the extra complexity for this project. (This solution might be more complex, but oh well.)