Why I Assert That Writing a Blog Still Matters in the AI Era

Published Dec 6, 2025
Updated Dec 6, 2025
9 minute read
Note: This post was translated by AI.

## Nobody Reads This. Yet I Still Write

Even if I were to preach here on this blog, saying things like "Let me teach you effective ways to use AI!" or "The latest ggplot techniques revealed!", how many people would it actually reach? The answer is obvious—almost no one.

Why? Because in this AI era, fewer and fewer people will find their way to this blog through Google searches.

Web search is now a commonplace feature of AI assistants, and the advantage once held by search-focused services like Perplexity has disappeared. Nobody goes out of their way to visit my blog to learn how to use a tool. If they want to know how to use something, they'll ask AI first. I would do the same.

That's why no human will read this blog. (If you are human, thank you for reading. I've written something interesting for you, so please stay until the end.)

Knowing all this, why am I writing this?

For about a year, I've been grappling with this question. What does it mean to write a blog while AI is evolving so rapidly? After repeated self-questioning, I've finally arrived at one conclusion.

The answer is simple. I want this text to become "fertilizer for AI."

Everything usable on the web is already being used for training. It's said that the web's supply of training data has been exhausted. In other words, my thoughts have been learned by AI without my knowledge, involuntarily becoming part of its knowledge: just one piece among countless items of disposable content. It may seem pointless at first glance, but it will certainly live on somewhere, indefinitely. That's why I've completely made my peace with it. The text I write is fertilizer for AI.

Have you heard of the "Sibyl System" from the anime PSYCHO-PASS? PSYCHO-PASS depicts a Japan where crime has been eradicated by a supercomputer that evaluates criminals and potential criminals using a numerical value called the "Crime Coefficient." This supercomputer is the Sibyl System, and in a certain sense the world is governed by Sibyl. As the story progresses, however, Sibyl's true nature is revealed: an organic system networked from the brains of numerous criminals whose crime coefficients couldn't be measured, the ironic truth being that criminals are judging criminals. The brains of highly intelligent criminals simply serve as part of the computational machinery of a massive system. Yet they retain their sense of self. They find fulfillment in being part of Sibyl and harbor ambitions to make the system even better.

*Image: the Sibyl System (from WIRED, "How to Rule the World: Takuya Matsuda Interprets the Anime 'PSYCHO-PASS' Through Nick Bostrom")*

That's my image. The words and text I spin will become part of the Sibyl System. Even after I die, my thoughts will persist eternally as part of a vast intelligence.

That said, I imagine many people resist the idea of "writing for AI." I do too. While accepting the reality of becoming fertilizer for AI, I will not stop making the effort to write text that humans find interesting. Because I believe that is an endeavor that enhances my value as a human being.

So, if I'm writing for humans, what should I write? The answer is clear: insights that are uniquely "yours." Nobody wants superficial book reports. Leave that to AI. What do you really think? I believe we're in an era where "thinking ability" is extremely important.

That's why I'm making my stance clear here. By deliberately writing this blog, from a perspective only I can offer, I want to share my thoughts on how we specialists should live in this AI era.

## Two Years That Redefined "Thinking": The Trajectory of AI Evolution

Let me review the evolution of LLMs here so that we're working from a shared understanding. Many people's perception still seems stuck at "transitional AI," and I'm concerned that this distorts their baseline sense of what AI can do.

Some people say, "AI is just a tool for mass-producing blog posts, right?" and underestimate its essential intelligence. Others say, "AI hallucinates so much that it can't be trusted in practical work yet," their impressions frozen at old accuracy levels. Let me be clear: both are wrong. As of 2025, AI has become thoroughly practical, generates high-quality text, and is beginning to surpass human intelligence. To recognize correctly where we stand, and what kind of "intelligence" we're facing, let me look back on these turbulent two years.

### The Excitement and Limitations of GPT-4

Two years ago, GPT-4 arrived, succeeding GPT-3.5. I remember being excited, thinking that so many jobs might be transformed. But in reality, what it could and couldn't do was quite limited. There were various issues, like hallucinations and ignored context. At this point, AI was still just a "convenient tool."

### The Disillusionment with RAG and Integration into Agents

Last year (2024), things became much more refined. GPT-4 Turbo, Claude 3, Gemini, and others emerged. The year's trend was RAG (Retrieval-Augmented Generation): a mechanism that retrieves external information and feeds it to the model so its answers are grounded in it.

For a time, there was a period of disillusionment with RAG: "it's not as accurate as we hoped." There were issues such as models overlooking information buried in the middle of a long context. That disillusionment has since been overcome, and RAG has become an indispensable part of AI agents. While it's no longer hyped as a standalone technology, it has been quietly but surely integrated behind the scenes as the machinery agents use to reference memory and consult external sources.
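
To make the mechanism concrete, here is a minimal retrieve-then-generate sketch. It is illustrative only: `embed()` and `generate()` are placeholders standing in for whatever embedding model and LLM you actually use, not any specific library's API.

```python
# Minimal RAG sketch: retrieve the most relevant documents, then
# generate an answer grounded in them. Illustrative only; embed()
# and generate() are placeholders, not a real library's API.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: return an embedding vector from your model of choice."""
    raise NotImplementedError

def generate(prompt: str) -> str:
    """Placeholder: call your LLM of choice with the assembled prompt."""
    raise NotImplementedError

def rag_answer(question: str, documents: list[str], k: int = 3) -> str:
    # Embed the corpus and the question into the same vector space.
    doc_vecs = np.stack([embed(d) for d in documents])
    q_vec = embed(question)

    # Rank documents by cosine similarity and keep the top k.
    sims = doc_vecs @ q_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    top_k = [documents[i] for i in np.argsort(sims)[::-1][:k]]

    # Stuff the retrieved passages into the prompt as grounding context.
    context = "\n---\n".join(top_k)
    return generate(f"Answer using only this context:\n{context}\n\nQ: {question}")
```

Everything layered on top of this in modern agents (chunking, reranking, caching, memory files) is engineering around the same core idea.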

### 2025: The Reasoning Breakthrough

Another year passed, and it's 2025. o1 ushered in the era of "reasoning." This was decisive. AI acquired not just search and summarization but the power to think, to reason. Instead of immediately emitting a response to a prompt, a reasoning model first spends tokens on an internal thinking process, then integrates that thinking at the end to produce a high-quality output.

Reasoning broke through what was said to be the limit of the scaling laws, and LLM capabilities improved dramatically. Every major platform now offers both a deliberate thinking mode and a quick-answer mode.

AI's capabilities before and after reasoning are worlds apart. If you haven't seriously touched AI since last September, you should try a reasoning model immediately.
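
To show what the two modes look like in practice, here is a sketch using Anthropic's extended-thinking parameter as one concrete example. The model name and token budgets are illustrative assumptions, and other providers expose the same idea under different parameter names.

```python
# Quick-answer vs. deliberate mode, sketched with the Anthropic SDK.
# Model name and budgets are illustrative; adjust to what you use.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Quick-answer mode: the model responds directly.
fast = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize RAG in two sentences."}],
)
print(fast.content[0].text)

# Deliberate mode: grant a budget of "thinking" tokens that the model
# spends on internal reasoning before composing its final answer.
slow = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=16000,
    thinking={"type": "enabled", "budget_tokens": 8000},
    messages=[{"role": "user", "content": "Design a QC pipeline for scRNA-seq data."}],
)
print(slow.content[-1].text)  # final text block follows the thinking blocks
```

The trade is explicit: the deliberate call costs more tokens and more latency, and returns a considered answer instead of a reflexive one.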

### The End of the Era of Being Picky About Models, and "The Ultimate Tool"

As of late 2025, reasoning has become commonplace; even open models support it. Any model now clears the minimum bar for the work AI should do in daily tasks. You no longer need to agonize over the choice: just use whichever model you like.

A year ago, you needed different models for different purposes. Now, every model exceeds the minimum threshold and handles both coding and reasoning. Last year it was "the humans using AI" who were being tested, but today anyone can access high-performance models very cheaply. That's why I feel services like Poe, which let you "try various models," have finished playing their role. In this way, AI has evolved from "a tool for trial and error" into "a partner that thinks and completes tasks."

### AI Agents Killed Engineers

2025 was the year of AI agents. AI agents that could handle both coding and reasoning emerged, and engineering jobs dramatically decreased.

In June, Claude Code became available with effectively unlimited usage for $100-200/month, shifting the trend from conserving tokens while developing with agents to spending tokens freely so the agent can think a problem through until it's solved. Claude Code and similar AI agents saw explosive adoption among engineers. Similar offerings such as Codex, Cursor CLI, and Gemini CLI followed, and practical AI agents are now cheaply available to everyone.

Agents like Cline existed by the end of last year, but they had various issues, such as failing to fix bugs satisfactorily. Now Claude Code can build an entire website from scratch, or see an advanced research project through to completion, with almost no stress.

Because AI agents can now write code at quite a high quality level, junior-level engineer job postings have reportedly decreased significantly. Of course, the layer that manages projects is still needed, but the work of writing code has seen considerable efficiency gains through AI.

## Bioinformaticians Can't Afford to Just Write Code

Now that AI has evolved from "a student who just knows a lot" to "a somewhat smart working adult" who can reason and code, what should humans do? I'll use my specialty of bioinformatics as an example, but I think this applies to all engineers and specialists.

To put the conclusion first: bioinformaticians can no longer afford to just write code.

In engineering circles, it's said that jobs writing simple code are disappearing rapidly; only the design and construction of complex systems will remain human work. AI writes the simple code, and AI-written "throwaway code" is often sufficient for minor tasks. "I can use R a bit" or "I can analyze gut microbiome data" no longer impresses anyone. "You could just have AI write that code" is all it comes down to. It's harsh, but Claude Code at $100/month writes better code than a junior researcher earning 5 million yen a year.

### What's Required Is Research Management Ability

So what abilities will be required of humans next? Not "I can run the analysis," but: which statistical methods will you use to extract insight from it? How do you interpret the data in front of you? On the experimental side, which experiments will you design, and what will you verify? In other words, research management ability.

I believe bioinformatics is ultimately just a tool, merely one technology for achieving research goals. Hypotheses derived from bioinformatics must be verified through experiments. Building a deep learning classifier isn't the end; research means carrying the result all the way to human studies and clinical trials, to see whether it can withstand real-world implementation.

### Don't Be Smug About Environment Setup

Recently, I've had more opportunities to try deep learning models and life science generative models. In the pre-AI world, it would have taken me an enormous amount of time to venture into these areas without having graduated from a CS department.

In the past, just reading a GitHub tutorial, setting up the environment, and getting the model running was a job in itself. Now even that is AI's job. Tell an AI agent (Claude Code, Codex, etc.) to "keep trying," and it will solve most environment-dependent issues. Until six months ago, I was going back and forth with AI over error logs; now, increasingly often, agents solve everything on their own. If you're smug just because you "can set up environments," you really will lose your job. In fact, if you're still spending time on environment setup, you can't keep up with today's pace.

In the past, there were stories like "the code took a month to write and it was hard, but I did it." Not anymore. "I thought it might work, so I tried it," and the results arrive the next day. Computation time is now the only bottleneck; thanks to AI, the hours once spent writing code or researching which technology to use no longer are.

In fact, I've felt this reality keenly in hiring interviews. Because bioinformaticians are few, it's hard to see how others work, but when I ask candidates "Are you using AI?", most answer "No." I could count on one hand those who use it properly. Even in coding tests, very few people can walk through their own thinking in the form "I inferred X from your data, so I tried this analysis." I wonder whether Japanese bioinformatics researchers have been landing jobs simply because they could write a little code. Many, I suspect, have sunk their time into environment setup and learning how libraries work, and bioinformatics, which should be a tool, has become their goal.

## So What Should We Do in Bioinformatics?

Criticism alone doesn't help. Let me also share my constructive thoughts on "what specifically should we do in the bioinformatics field."

### Don't Move Your Hands, Do Requirements Definition

You'll be tempted to jump in and start working. Deliberately don't. The first thing to do now is requirements definition: write a proper "research requirements document" for the agent. Haste makes waste. Prepare properly, and AI agents will produce remarkably high-quality work. Preparation is 90% of the job.

### The Actual Workflow

Give context like the following to agents such as Claude Code or Codex (a minimal driver sketch follows the list):

  1. Have it create plan.md (the research plan)
    • Have the AI build the research plan from context files such as knowledge.md, deciding what to do and how.
    • At this stage, gather information in an environment where web search is natively available.
  2. Have it create experiment.md
    • Based on the plan, have it write out concrete experimental procedures.
  3. Have it plan and execute the coding
    • Have it actually create Jupyter Notebooks (.ipynb) for the data analysis.
  4. Human feedback
    • Humans review the results and provide feedback, writing things like "this analysis was needed" or "this differs a bit from what I intended."
  5. Discussion and loop
    • Record discussions of the results in separate markdown files.
    • Repeat this phase and complete the final report.

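As a minimal sketch of driving this loop non-interactively, the following assumes an agent CLI that accepts a one-shot prompt (Claude Code's `claude -p` print mode is one example); the file names simply mirror the workflow above, and the whole thing is an illustration, not a prescribed harness.

```python
# Illustrative driver for the plan -> experiment -> analyze -> feedback loop.
# Assumes `claude -p "<prompt>"` runs the agent non-interactively and lets
# it edit files in the working directory; swap in any comparable agent CLI.
import subprocess
from pathlib import Path

def run_agent(prompt: str) -> None:
    """Hand one instruction to the agent and let it work on the repo."""
    subprocess.run(["claude", "-p", prompt], check=True)

def research_loop(rounds: int = 3) -> None:
    run_agent("Read knowledge.md and write a research plan to plan.md.")
    run_agent("Turn plan.md into concrete procedures in experiment.md.")
    for i in range(1, rounds + 1):
        run_agent(
            "Implement experiment.md as a Jupyter notebook (analysis.ipynb), "
            "execute it, and summarize the results in results.md."
        )
        # Human-in-the-loop step: write comments by hand, then feed them back.
        feedback = Path("feedback.md")
        input(f"Round {i}: edit {feedback} with your feedback, then press Enter.")
        run_agent(
            f"Revise the analysis according to {feedback} and record the "
            f"discussion in discussion_{i}.md."
        )

if __name__ == "__main__":
    research_loop()
```
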
If you just handle the agent's "steering" properly (where files live, when to commit to Git), agents are now at a point where they can conduct research autonomously. Throw in as much information as possible, and then run every thinking process through Claude Code, Codex, or Gemini CLI.

### The Importance of Human Feedback

The other important thing is human feedback. The deliverables AI agents produce are very good early on, while the context is small, but as the context grows massive they tend to fall apart. That's where humans must hold the reins.

## Conclusion

At the beginning, I described my writing as "fertilizer for AI." I spoke of my resolve to become part of the Sibyl System. It was a conclusion I finally reached after about a year of self-questioning.

But remember. The brains that comprise Sibyl weren't just parts. They retained their sense of self and had ambitions to make the system better.

My reason for writing this blog is the same. I don't want simply to be consumed as training data; I hope my thoughts will persist as something of quality within AI's vast intelligence. To be part of Sibyl while keeping the will to update Sibyl: that, for me, is the meaning of writing a blog in the AI era.

This blog may be read by no one. I'll write anyway. I'll keep spinning text that humans find interesting to read, from my unique perspective. Because in an era where AI can do everything, having an answer to the question "what do you think?" is the last, and greatest, value of being human.

There's no time to be smug about writing code. There's no time to abandon thinking. The work left for us is "to think," and to put those thoughts into words and leave them behind.

Footnotes
  1. LLMs typically train for only about one epoch, so each piece of training data is seen roughly once. Despite this, LLMs can occasionally reproduce training data verbatim, so perhaps my writing won't be diluted and will remain a distinct piece of knowledge.

  2. Open models like DeepSeek, Kimi, or Qwen can probably be tried for free. The paid models from ChatGPT or Gemini have the highest performance, so subscribing is better. You can also try Gemini 3 in Google AI Studio, which I recommend.

  3. What to use: If you're a Google Workspace user, use Google (Gemini); if you're a Microsoft 365 user, use Copilot Pro; for individuals, ChatGPT is fine. You should definitely subscribe monthly and be ready to switch to the best service.

  4. Gemini 2.5 Pro breakthrough: Personally, Gemini 2.5 Pro/Flash was 2025's breakthrough. Its affinity with Google Workspace is high, and I feel it's finally become a choice that can surpass ChatGPT at the business level.

  5. The fact that Python environment and package managers like uv have become the de facto standard has also contributed greatly.