Field Notes Week 172/520: Who Owns Your Digital Ghost?
Welcome to this week’s Field Notes, a 10-year project of mine documenting humankind’s digital transition from the field. These notes are shaped by what I’m seeing, building, and discussing as our physical and digital lives continue to converge.
- Ryan
(Connect with me on LinkedIn)
News is surface-level. Signals live underneath. This section captures developments that hint at deeper shifts in how digital systems are being built, governed, and adopted — often before they’re obvious in the mainstream narrative.
This week, a cluster of AR and smart-glasses stories pointed in the same direction. Not toward the metaverse, which has mostly faded as a useful organising term. And not quite toward mainstream replacement of the smartphone either. What stood out instead was something quieter: the interface layer is starting to settle. The operating systems are maturing, the design languages are being named, and the privacy tensions are becoming harder to hide. The category still feels early. But it is becoming legible.
Pico is preparing the software layer before the hardware arrives
Pico used the past month to unveil Pico OS 6 and preview Project Swan, its next flagship mixed-reality device, which it says is planned for global launch in late 2026. That sequence matters. The company is not just teasing hardware. It is showing the environment that future hardware will run inside. In categories like this, software previews are often a way of asking developers and partners to begin orienting early, before the product itself becomes concrete. (Road to VR)
What lingers is the form factor implied by the move. Project Swan points toward a lighter, more glasses-adjacent future, while OS 6 suggests Pico is trying to make the operating layer visible now rather than later. The signal is that the battle may be shifting away from headsets as isolated devices and toward ecosystems preparing for something smaller, more ambient, and potentially more wearable.
Google is designing the grammar of glasses before the market fully forms
Google’s Glimmer is the company’s named design language for the interfaces in its coming Gemini smart glasses with transparent heads-up displays, and eventually fuller AR glasses. That is a small but important step. New platforms tend to become real not when the hardware arrives, but when the interface rules start to stabilise. Google is effectively trying to define how glanceable information, app views, and interaction should work when computing moves into the line of sight. (UploadVR)
This feels more structural than promotional. Phones had to learn taps, swipes, notifications and cards. Glasses will need their own equivalents. By naming Glimmer now, Google is signalling that AR and HUD glasses are no longer being treated purely as speculative hardware. They are becoming a design problem, which usually means companies expect the category to persist long enough for habits to form around it.
Meta’s glasses are starting to behave more like a platform
Meta’s Ray-Ban Display received its first major OS update this month, adding widgets, two new minigames, a new app and other features. On paper, these are small additions. In practice, they suggest a shift from glasses as a narrow capture-and-assistant tool toward glasses as a lightweight computing surface with repeatable behaviours and its own software cadence. (UploadVR)
That is what stood out. A platform does not begin when everything works. It begins when updates, interaction loops, and recurring surfaces appear often enough that people start to expect them. The addition of widgets and other persistent features suggests Meta is testing whether glasses can become a place users return to, not just a device they occasionally invoke.
The privacy story is moving from background concern to product condition
The sharpest counter-signal came from reporting that subcontractors were able to see intimate Meta AI visual queries from the company’s smart glasses, sometimes accidentally triggered. The report, cited by UploadVR, raised concerns not only about bystanders and ambient capture, but about the owners themselves and what kinds of context their devices may be sending into human review systems. (UploadVR)
This matters because it shifts privacy from a theoretical objection to an operational one. Smart glasses are not just another gadget class. They are a way of collecting, interpreting, and routing lived context. Once that context passes through review pipelines, subcontractors, and AI systems, trust stops being a policy page issue and starts becoming part of the product itself. If glasses are going to become ambient infrastructure, then data handling will sit much closer to the centre of the category than the industry often admits. (UploadVR)
What stood out
Taken together, these stories suggest that AR is narrowing into something more specific. Less world-building. More overlays. Less immersion as spectacle. More computing at the edge of attention. Pico is preparing the environment before the device arrives. Google is naming the interface conventions. Meta is testing habitual software behaviour. And the privacy model is beginning to show strain under real use. That feels like a category moving from imagination into conditions. Still early. But less abstract than it was.
What it is
This week’s watch is a simple preview of Google’s emerging smart-glasses direction. But the more useful way to read it is as an early look at how the company thinks computing should behave once it moves into the line of sight. The glasses still feel early. The field of view is limited. The interaction model is not yet settled. But that is part of what makes the video useful. It shows the category before the conventions have fully hardened.
What stood out
What stood out is that this does not feel like a phone replacement. It feels more like a system for brief overlays. A prompt. A translation. A small layer of context appearing just long enough to be useful, then receding again. That is a different ambition from the older AR framing, which often leaned on immersion, spectacle, or the idea of entering a parallel digital space. Here, the logic feels lighter and more ambient.
That is why Google’s work on Glimmer matters. The video helps make visible what a design language for smart glasses might actually mean in practice. Not just menus and visuals, but the timing of information, the restraint of the interface, and the question of how much computing can sit at the edge of attention without overwhelming it.
Why it lingers
It lingers because it suggests the category may be narrowing into something more specific and more plausible. For years, AR has struggled with being either too ambitious or too awkward. This feels like a quieter path. Less world-building. More contextual assistance. Less immersion as destination. More computing as a thin layer over the real world. That may prove to be the more durable direction.
Still unresolved is whether people will actually want to live with this kind of interface, and under what conditions. Once computing sits inside the line of sight, design stops being only about utility. It becomes a question of interruption, trust, and social tolerance.
Digital assets now sit less as an idea and more as infrastructure in progress. As physical and digital life continue to converge, money and assets are doing the same. What was once framed as “crypto” is increasingly showing up as rails, balance sheets, and policy conversations.
🔥🗺️Heat map shows the 7-day change in price (red down, green up); block size reflects market cap.
🎭 Crypto Fear and Greed Index is an insight into the underlying psychological forces that drive the market’s volatility. Sentiment reveals itself across various channels, from social media activity to Google search trends, and when analysed alongside market data, these signals provide meaningful insight into the prevailing investment climate. The Fear & Greed Index aggregates these inputs, assigning a weighted value to each, and distils them into a single, unified score.
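The aggregation described above can be sketched as a weighted average. This is purely illustrative: the real index’s inputs and weights are its own, so the signal names and weights below are hypothetical placeholders, assuming each input has already been normalised to a 0–100 scale.

```python
# Illustrative sketch only. The actual Fear & Greed Index methodology is
# the provider's own; these signal names and weights are hypothetical.

def fear_greed_score(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Combine 0-100 sentiment signals into a single weighted 0-100 score."""
    total_weight = sum(weights[name] for name in signals)
    weighted_sum = sum(signals[name] * weights[name] for name in signals)
    return weighted_sum / total_weight

# Hypothetical inputs, each pre-normalised to a 0-100 scale.
signals = {"volatility": 30.0, "social_media": 70.0, "search_trends": 55.0}
weights = {"volatility": 0.25, "social_media": 0.15, "search_trends": 0.10}
score = fear_greed_score(signals, weights)  # single unified score
```

The point of the sketch is simply that the output is one number: many noisy channels are reduced to a single reading, which is both the index’s appeal and its limitation.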
This section captures developments at the edge of digital systems. New interfaces, tools, and capabilities that feel early, unfinished, or slightly ahead of their moment. I’m less interested in what’s impressive today and more interested in what might quietly reshape how people work, coordinate, and interact over time.
The digital ghost is not a product. It is a worldview
The frontier tech story this week is not really about a patent. It is about what a platform believes a person is.
Last Thursday, Karaitiana Taiuru published a Te Ao Māori analysis of a Meta patent granted by the United States Patent and Trademark Office on 30 December 2025. The patent, titled Simulation of a User of a Social Networking System Using a Language Model, describes a system that could train an AI model on a user’s posts, messages, voice data, browsing patterns and platform behaviour, then deploy that simulation as an active bot after the user’s death. According to the analysis, the system is designed not just to respond when contacted, but to engage proactively across the platform, including direct messages and feed interactions, with the user’s identity attached.
Meta has reportedly said it has no plans to move forward with it. That may be true in the narrow sense. But the patent remains granted intellectual property. It exists as an available capability, and that matters because patents are often less useful as predictions of immediate release than as signals of institutional imagination. They show what a company thinks is worth securing.
Taiuru’s argument is that, from a Te Ao Māori perspective, the problem is not merely ethical discomfort. It is a direct conflict with tikanga, with the sacred boundary between the living and the dead, with the role of tangihanga in guiding the wairua onward, and with the tapu attached to a deceased person’s voice, words, image and relational presence. In that reading, a posthumous simulation is not remembrance. It is the commercial recirculation of what should have been released.
Even outside that framework, the patent forces a harder question than the usual AI debate allows. Most discussions about synthetic companions, memorial bots, or grief tech still assume the core issue is consent. Did the person opt in? Did the family agree? Were the settings clear? Those questions matter, but they do not go far enough. The deeper issue is what kind of institution should be allowed to intermediate the boundary between memory and simulation in the first place. A digital ghost is often described as an emotional technology. That feels too soft. It is really an infrastructure question. It asks who stores the traces of a life, who trains on them, who controls the model, who benefits from its activity, and whether a platform should be able to preserve relational behaviour as a monetisable asset after death.
That is what makes this story feel frontier. Not because it is futuristic, but because it sits just beyond the point where our legal and social categories still work cleanly. New Zealand’s Privacy Act 2020 applies only to living individuals, and Taiuru argues that posthumous digital identity remains largely unaddressed in law. His piece also situates the issue within Māori data sovereignty, CARE principles, Te Tiriti, and the question of whether corporate simulations of deceased people amount to a new form of extraction.
What stood out most was the clarity of the underlying commercial logic. In the analysis, Meta’s patent is framed as solving an engagement problem. The person dies. The account goes quiet. The platform experiences a loss of activity. The technical response is to simulate continuity. In that frame, death appears not as a sacred transition, nor even as a social rupture, but as a metric gap. That may be the real signal.
When a system begins to treat human absence as something to be engineered away, it has already made a deeper decision about what a user is. Not a citizen. Not a relation. Not even a customer, exactly. More like a node of behavioural output whose value can be extended beyond the body that produced it. Because once a platform can simulate the dead, it becomes easier to see what it is trying to do with the living.
“The real problem of humanity is the following: we have Paleolithic emotions, medieval institutions and god-like technology.”
E. O. Wilson
E. O. Wilson was an American biologist, naturalist, and one of the world’s leading authorities on ants. He spent decades at Harvard and became widely known not just for his scientific work, but for his attempts to explain human behaviour, social organisation, and biodiversity in long historical terms. That is what makes this line worth returning to. It does not describe a temporary glitch. It describes a structural mismatch. Our tools keep compounding. Our institutions revise slowly. Our instincts barely move.