# I Renewed My Blog

Published Nov 17, 2025
Updated Nov 17, 2025
6 minute read
> **Note:** This post was translated by AI.

## Introduction

I renewed my blog. This is the fourth blog I've built as an adult. I've always wanted the feel of a cool tech blog without being particular about the framework, but every time I build one, I eventually become dissatisfied with some feature, realize it's hard to fix within the existing framework, and get the urge to scrap everything and rebuild.

This time the same impulse was the trigger, but I also wanted to run an experiment: how good a site could a non-web engineer like me build with Claude Code?

In this entry, I'd like to show how this blog was built with AI agents, in the hope that it helps you put AI agents to use yourself.

## Architecture

I used a template called Sylph. The technology stack includes:

  • Next.js
  • pagefind (in-blog search functionality)
  • MDX
  • TypeScript
  • cmdk (command palette-style UI)
  • Tailwind CSS

I adopted this stack not because of any technical requirement, but because it's currently popular and becoming the de facto standard. Popular technologies are naturally more likely to be ones AI handles well, and issues get resolved faster. In retrospect, I think adopting Tailwind CSS improved maintainability. A common failure mode is that when you ask an AI to fix a design, it doesn't properly understand CSS inheritance, and its changes never show up in the result. Because Tailwind manages styles per element rather than through inheritance, I feel such mistakes are less likely to occur.[^1]
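To illustrate the point (the component below is a hypothetical sketch, not actual code from this blog), with Tailwind every style lives on the element it affects, so an AI edit to one component can't silently break another through a cascading selector:

```tsx
// PostCard.tsx — hypothetical sketch. All styling is utility classes on the
// elements themselves, so editing this file cannot change styles elsewhere
// through CSS inheritance or the cascade.
export function PostCard({ title, excerpt }: { title: string; excerpt: string }) {
  return (
    <article className="rounded-lg border border-zinc-200 p-4 hover:shadow-md">
      <h2 className="text-lg font-semibold">{title}</h2>
      <p className="mt-2 text-sm text-zinc-600">{excerpt}</p>
    </article>
  );
}
```

Whether this is the best CSS practice is a separate debate; what matters for AI-driven editing is that the blast radius of any single change stays local to the file being touched.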

## How to Create a Website Using AI Agents

### Non-Web Engineers and AI Agents

In the past six months or so, AI agents like Claude Code have come to be simply expected in software development. On the other hand, for engineers like me whose main field isn't the web, it's probably unclear where to even start and which tasks to hand over.

In fact, I have almost no knowledge as a web engineer, but by making full use of AI agents I was able to build a site that looks like this. I also use AI agents quite extensively in my main work in bioinformatics, yet surprisingly few people in the field use them seriously. That feels like a waste, since data analysis and other research-oriented work seem like a great fit.

### 1. First, "Let AI Handle Requirements Definition"

In development with AI agents, it's often said that writing a proper requirements definition first is important. For non-web engineers, though, there's a wall: what do you even write in a requirements definition for web development? I was exactly that type, so I gave up immediately and started by having the AI do the requirements definition itself.

First, I opened ChatGPT (GPT-5) and threw some rough questions at it:

  • What technology stack would suit a modern, cool personal blog?
  • What features and page structure would make it look technically "properly done"?

The point here is not to get greedy and attempt anything complex. With today's tooling, if you use Next.js or React and keep styling within the bounds of Tailwind CSS, you can build quite professional-looking sites. And at that level of complexity, an AI can easily grasp the whole codebase, and you can follow its behavior yourself.

At the initial prompt stage, I repeatedly emphasized things like "keep the implementation simple" and "no fancy features; prioritize maintainability." This was to prevent the classic AI accident where, before you know it, mysterious magical code has appeared.

### 2. Spec-Driven Development to Break Down Requirements

Once the technology stack and the rough ideal are decided, the next step is working out which files to create and which features to implement. From here I passed the baton from ChatGPT to editor-integrated AI agents like Claude Code and Codex.

What I adopted here was so-called spec-driven development. Using a tool called spec-kit, I created three kinds of documents together with the AI:

  • requirements: the target state and necessary features, written down in natural language
  • plans: how to implement them, broken down to the level of pseudocode and structure proposals
  • tasks: the plans broken down into a checklist of fine-grained tasks
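To give a concrete picture (the file names and contents below are invented for illustration, not my actual spec files), a small feature like in-blog search might be specced out roughly like this:

```markdown
<!-- specs/search/requirements.md (hypothetical excerpt) -->
## Goal
Readers can search every post from a command-palette UI (cmdk + pagefind).

<!-- specs/search/plans.md -->
## Plan
1. Generate a pagefind index from the static output after `next build`.
2. Open a cmdk dialog on Ctrl+K and query the index client-side.

<!-- specs/search/tasks.md -->
## Tasks
- [ ] Add the pagefind indexing step to the build script
- [ ] Create a SearchDialog component wrapping cmdk
- [ ] Debounce input and render results as links
```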

Even for a small project like a blog, there are surprisingly many files to create. If you show a model with a small context window every file at once and try to make it work, "runaway" behavior tends to happen:

  • modifications that ignore earlier assumptions
  • styles that break because a dependent file was overlooked

By writing the specs (requirements / plans / tasks) in advance, the workflow after a context reset becomes easy: load the requirements first, then hand over only the tasks at hand. Accidents decreased considerably.

By the way, I initially tried to build the blog from zero, but honestly that was quite tough. It took a long time to reach an MVP, and because the build was AI-driven, the file structure became too complex and later modifications were very painful. At one point I nearly lost all motivation to develop, but then I learned of the beautiful template Sylph, decided to rebuild on top of it, and the result is this blog.

### 3. Benefits of Using Sylph and MDX as the Base

Sylph has a very sophisticated design out of the box and looks beautiful as-is. And since its structure is simple, I felt it pairs well with AI agents in the sense that it doesn't break when they touch it.

The other thing I adopted this time, MDX, also does a lot for a tech blog's maintainability. With MDX, you can embed React/TypeScript components directly in the article body. UI that you want to try on just one page, which traditionally required a dedicated component per page, can be confined to a single article file.

When running a blog, needs like "I want to experiment on just this page" or "I might use this later, but want to try it here first" come up surprisingly often, so MDX's flexibility is something I personally really like.
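Here's a minimal sketch of what that looks like (the article and component are made up for illustration): a one-off component defined and used entirely inside a single article file.

```mdx
{/* posts/color-demo.mdx — hypothetical article. The component exists only here. */}

export const ColorSwatch = ({ hex }) => (
  <span className="rounded px-2 py-1 text-white" style={{ backgroundColor: hex }}>
    {hex}
  </span>
);

This page's accent color is <ColorSwatch hex="#0ea5e9" />, and no other
article or shared component needs to know about it.
```

If the experiment later graduates into something reusable, the component can simply be promoted into the shared components directory.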

### 4. Spec-Driven Development Also Fits Research Well

I think the idea of spec-driven development fits not only software development but also research and development fields like bioinformatics.

In research, too, we run cycles like this:

  1. Ask questions based on background knowledge and prior research
  2. Form hypotheses
  3. Design analyses and experiments to test the hypotheses
  4. Execute, evaluate the results, and move on to the next cycle

This "question → plan → tasks" structure is exactly requirements / plans / tasks. In my own work, I quite often have AI agents write experiment-note-style documents and break them down into subtasks before proceeding with an analysis. I'd like to organize that story into a separate article someday.

### 5. The MVP Is Easy; What Comes After Is the Real Work

Once the preparations are in place, the rest is all implementation. Up to the MVP (the minimum thing that actually works), you honestly almost never fail. If anything, you'll find yourself marveling at how amazing AI agents are as the site smoothly takes shape. This is a phase I'd like many people to experience.

What's really hard comes after that. In the phase of accumulating small fixes, like adding search functionality or fine-tuning design details, things like these happened constantly:

  • fixing one thing breaks another layout
  • CSS keeps growing haphazardly

If I'd had basic web knowledge I might have been able to put up more guardrails, but at the time I was essentially "vibe coding," entrusting everything to the AI's momentum, so I ended up having to substantially refactor the site's CSS.

## Detailed Notes

Finally, let me briefly touch on the AI agents I used for this project.

### Types of Agents and Recommendations

The one I used most was Claude Code. It implements very quickly and is smart enough to handle refactoring of a certain scale safely. Being on the Max plan at work lets me run it without worrying about context limits, which is a big advantage. Since Sonnet 4.5, accuracy has improved further, and cost included, I feel it's the best-balanced "first agent to reach for." Its substantial skills (slash commands) and MCP integration are another reason to choose it.

The next one I used often was Codex (a GPT-5-based agent). There were many cases where a bug Claude Code simply couldn't fix was fixed in one shot. My impression is that Codex's strength lies in harder tasks and tasks that require understanding existing code. In exchange, GPT-5 Medium responds somewhat more slowly than Claude Code (Sonnet 4.5), so the division of labor became: Claude Code when I want to implement at a brisk, conversational pace, and Codex when I want to sit down and crack a difficult problem.

Although I ended up barely using it for this blog, I often used Gemini CLI while trial-and-erroring before adopting Sylph. Gemini 2.5 has a very large context window, so it's well suited to passing in a whole set of files at once for bug fixes or code review. On the other hand, it sometimes introduces small bugs or makes unintended edits, and accuracy-wise I can't deny it's a step behind the other models. Even so, being able to try a great deal for free is a big attraction, and I think it's a good choice for the "I just want to try an AI agent first" phase.[^2]

## Conclusion

Above, I've walked through the behind-the-scenes of the blog renewal and how I built a website using AI agents.

Even a non-web engineer can build a blog like this by dividing roles properly with AI agents. I'd be happy if this entry becomes a small first step toward using AI agents in earnest.

Well then, see you in the next article.

[^1]: I learned this characteristic of Tailwind from AI while creating the site. I probably should have looked at the philosophy and concept of the framework I was using first.

[^2]: Now I think starting with Gemini 3 would be good.