Here is the version of the idea I can defend today: software that can change with the user instead of staying fixed.
Project: Wiggly Works is an experiment around that idea. The larger goal is to explore a different kind of software substrate, one that is more editable, more flexible, and better suited to systems built with AI in the loop. This is not a polished product effort. It is an early test. Some assumptions may hold up. Some may need to be refined. Some may fail cleanly. Right now, what matters is getting concrete enough to find out.
A fair objection here is that all software changes. Of course it does. Every app evolves if people keep working on it.
What feels different about this project is not the fact of change, but the way change is meant to happen. The idea is not just a codebase that gets edited over time. It is a system with a more explicit internal structure, something closer to a model of itself, so that new behavior can be proposed and shaped in a more organized way than opening a pile of files and editing them.
That structure is still not fully defined, and I do not want to pretend otherwise. But that is part of what this project is trying to discover.
What We Are Trying to Validate
The core idea is not that software should mutate by itself in a chaotic or fully autonomous way. It is that software can be designed to evolve through directed changes, with AI helping carry them out under human guidance.
To test that, we want the smallest product that can put real pressure on the idea. The first experiment is still being dialed in, but the current center of gravity is replication: can the system recreate itself, possibly with a small modification, while remaining coherent? That sounds narrow, but it reaches the heart of the project. If a system can inspect its own structure, reason about it, and produce a modified version without falling apart, that tells us something worth knowing.
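To make "recreate itself with a small modification, coherently" concrete, here is a deliberately tiny sketch in Rust. Everything in it, `SystemModel`, `replicate_with`, and the coherence check, is invented for illustration; none of it is the project's actual design.

```rust
// Hypothetical sketch: a model of the system that can produce a modified
// copy of itself, then check that the copy is still coherent.
// All names here are illustrative, not the project's real API.

use std::collections::BTreeMap;

#[derive(Clone, Debug)]
struct SystemModel {
    // Components the system believes it is made of, keyed by name.
    components: BTreeMap<String, String>,
}

impl SystemModel {
    // Replicate the model, applying one small modification on the way.
    fn replicate_with(&self, name: &str, spec: &str) -> SystemModel {
        let mut copy = self.clone();
        copy.components.insert(name.to_string(), spec.to_string());
        copy
    }

    // A crude stand-in for "coherent": non-empty, with no empty specs.
    fn is_coherent(&self) -> bool {
        !self.components.is_empty()
            && self.components.values().all(|spec| !spec.is_empty())
    }
}

fn main() {
    let mut components = BTreeMap::new();
    components.insert("engine".to_string(), "rust core".to_string());
    components.insert("renderer".to_string(), "web ui".to_string());
    let original = SystemModel { components };

    // Recreate the system with one small change.
    let copy = original.replicate_with("persistence", "sqlite");

    assert!(original.is_coherent());
    assert!(copy.is_coherent());
    assert_eq!(copy.components.len(), 3);
    println!("replicated coherently: {}", copy.is_coherent());
}
```

A real coherence check would of course be far richer than "no empty specs"; the point of the sketch is only the shape of the experiment: inspect, copy, modify, verify.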
The Technical Starting Point
We are deliberately starting with a simple architecture. The current plan is a Rust engine, a separate renderer, a web-first interface, and SQLite for persistence.
Rust keeps coming up for the engine not because it is fashionable, but because the project needs a core that is explicit and predictable about state and change. If the whole point is controlled evolution, the foundation needs to be stable enough to inspect.
We also want the engine separated from the renderer. The part that defines and evolves the system should not be too tightly coupled to the part that displays it. That split gives the project room to change interfaces later without rebuilding the core. For the first renderer, web is simply the fastest way to iterate and inspect what is happening. For persistence, SQLite is the practical starting point: lightweight, portable, and good enough for early local experiments.
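One way that split could look in Rust is a trait-shaped seam: the engine owns state and change, and any renderer only gets a read-only view. The names below (`Engine`, `Renderer`, `TextRenderer`) are hypothetical, chosen for illustration rather than drawn from the project:

```rust
// The engine owns state and change; it knows nothing about display.
struct Engine {
    nodes: Vec<String>,
}

impl Engine {
    fn new() -> Self {
        Engine { nodes: Vec::new() }
    }
    fn add_node(&mut self, label: &str) {
        self.nodes.push(label.to_string());
    }
}

// Any renderer only sees a shared, read-only borrow of the engine.
trait Renderer {
    fn render(&self, engine: &Engine) -> String;
}

// A first, throwaway renderer. A web renderer would be another
// implementation of the same trait, with no change to the engine.
struct TextRenderer;

impl Renderer for TextRenderer {
    fn render(&self, engine: &Engine) -> String {
        engine.nodes.join(" -> ")
    }
}

fn main() {
    let mut engine = Engine::new();
    engine.add_node("root");
    engine.add_node("child");
    let out = TextRenderer.render(&engine);
    println!("{}", out); // root -> child
}
```

The design choice being sketched: because renderers depend on the engine's types and not the reverse, the core that defines and evolves the system stays free to outlive any particular interface.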
Why We Are Not Starting With a Graph Database
A graph model is central to the system, but a true graph database is not the first move.
The model itself is graph-like: nodes, relationships, and structures that can change over time instead of being locked into one rigid schema. That flexibility matters. If the software is meant to adapt meaningfully, the representation underneath it has to adapt too.
Still, it feels too early to commit to specialized infrastructure before the mutation model is proven. We can model enough of the graph shape on top of SQLite to learn what the system needs. If the experiment later earns a dedicated graph database, that will be a better time to take on the extra complexity.
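As a sketch of what "modeling the graph shape on top of SQLite" might mean, the comment below shows one plausible two-table schema, and the Rust structs stand in for rows from those tables. The schema and all names are assumptions made for illustration, not a committed design:

```rust
// One plausible relational shape for a graph in SQLite:
//
//   CREATE TABLE nodes (id INTEGER PRIMARY KEY, label TEXT NOT NULL);
//   CREATE TABLE edges (src INTEGER, dst INTEGER, kind TEXT NOT NULL);
//
// The structs below stand in for rows of those tables.

struct NodeRow { id: i64, label: String }
struct EdgeRow { src: i64, dst: i64, kind: String }

// The SQL equivalent of this traversal would be roughly:
//   SELECT dst FROM edges WHERE src = ?;
fn neighbors(edges: &[EdgeRow], src: i64) -> Vec<i64> {
    edges.iter().filter(|e| e.src == src).map(|e| e.dst).collect()
}

fn main() {
    let nodes = vec![
        NodeRow { id: 1, label: "engine".into() },
        NodeRow { id: 2, label: "renderer".into() },
        NodeRow { id: 3, label: "store".into() },
    ];
    let edges = vec![
        EdgeRow { src: 1, dst: 2, kind: "feeds".into() },
        EdgeRow { src: 1, dst: 3, kind: "persists-to".into() },
    ];

    // Walk node 1's outgoing edges, resolving ids back to labels.
    for e in &edges {
        println!("{} -[{}]-> {}", e.src, e.kind, e.dst);
    }
    for dst in neighbors(&edges, 1) {
        if let Some(n) = nodes.iter().find(|n| n.id == dst) {
            println!("engine -> {}", n.label);
        }
    }
}
```

Nothing here needs graph-database features; that is the point. If traversals like this ever dominate and strain the relational shape, that is the signal that a dedicated graph store has earned its complexity.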
Coherence Over Novelty
One of the main requirements is coherence. A self-modifying system is only interesting if it can change without dissolving into confusion.
That is why versioning matters here as part of the model, not just as tooling around the edges. The system should be able to track what changed, how it changed, and what state it believes itself to be in. That historical memory is part of how we inspect drift instead of merely hoping it is not happening.
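A minimal way to make "versioning as part of the model" concrete is to record every change as data and define the current state as whatever the log replays to. The sketch below assumes that event-log shape; `Change` and `replay` are invented names, not a committed design:

```rust
// Every change is a value in a log; state is derived by replaying it.
// Diffing the replays of two prefixes of the log shows exactly what
// changed between two versions, which is how drift gets inspected
// rather than assumed away.

use std::collections::BTreeSet;

#[derive(Debug, Clone)]
enum Change {
    AddNode(String),
    RemoveNode(String),
}

// Replaying the full history reconstructs the state the system
// believes itself to be in.
fn replay(log: &[Change]) -> BTreeSet<String> {
    let mut state = BTreeSet::new();
    for change in log {
        match change {
            Change::AddNode(n) => { state.insert(n.clone()); }
            Change::RemoveNode(n) => { state.remove(n); }
        }
    }
    state
}

fn main() {
    let log = vec![
        Change::AddNode("engine".into()),
        Change::AddNode("renderer".into()),
        Change::RemoveNode("renderer".into()),
    ];
    let state = replay(&log);
    assert!(state.contains("engine"));
    assert!(!state.contains("renderer"));
    println!("state after replay: {:?}", state);
}
```

The historical memory the section describes falls out for free here: the log itself is the record of what changed and in what order, so "what state do I believe I am in" is always answerable by replay.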
A Second Experiment
There is also a meta-layer to this project. Beyond the experiment in self-modifying software, there is another experiment in how the thing gets built.
The project is leaning hard on AI during development: writing code, shaping architecture, documenting decisions, drafting content, and helping move the work forward. But that does not mean handing over control. The human is still the director. AI is not the product owner. A better picture is a small team of agents taking on roles such as engineer, advisor, reviewer, or author, with a human deciding what is useful and what direction the project should take.
Even if the product idea changes, that part of the experiment may be worth keeping. It is useful on its own to learn how far a human-directed, AI-heavy building process can go in practice.
How We Will Judge Progress
We are not trying to prove a grand theory in one shot. We are trying to build concrete experiments and judge whether they are actually teaching us anything. For now, the questions are intentionally plain:
- Does the system still function?
- Does the change improve or at least preserve the intended behavior?
- Does the internal model remain consistent?
- Can we understand what changed and why?
Those are basic questions, but basic is enough to start. Better evaluation methods can come later. First the project needs something real enough to evaluate.
Why Everything Starts Locally
At the start, all of this runs locally. That is partly about safety and partly about speed. It keeps unstable experiments away from production systems, and it makes it easier to change, inspect, reset, and try again.
Local-first also fits the nature of the project. If different instances eventually evolve in different directions, that may be part of the point rather than a problem to eliminate immediately. For now, local development gives the project room to learn without pretending it is ready for wider deployment.
Where This Goes Next
This is not a final thesis. It is a starting point.
The immediate goal is to turn these ideas into a small working experiment, learn from what breaks, and keep refining from there. Some design choices will change. The first product shape may change too. That is expected. The value, at this stage, is not certainty. It is getting out of the abstract and into something testable.
We want to learn what happens when software is built to evolve, what kind of structure makes that feasible, and how far human-directed AI can take us in building it.