ESI: Maya, I need your help. I don't know how to fix this.

SUNDARESH: What is it? Chioma. Sit. Tell me.

ESI: I've figured out what's happening inside the specimen.

SUNDARESH: Twelve? The operational Vex platform? That's incredible! You must know what this means - ah, so. It's not good, or you'd be on my side of the desk. And it's not urgent, or you'd already have evacuated the site. Which means...

ESI: I have a working interface with the specimen's internal environment. I can see what it's thinking.

SUNDARESH: In metaphorical terms, of course. The cognitive architectures are so -

ESI: No. I don't need any kind of epistemology bridge.

SUNDARESH: Are you telling me it's human? A human merkwelt? Human qualia?

ESI: I'm telling you it's full of humans. It's thinking about us.

SUNDARESH: About - oh no.

ESI: It's simulating us. Vividly. Elaborately. It's running a spectacularly high-fidelity model of a Collective research team studying a captive Vex entity.

SUNDARESH: How deep does it go?

ESI: Right now the simulated Maya Sundaresh is meeting with the simulated Chioma Esi to discuss an unexpected problem.

[indistinct sounds]

SUNDARESH: There's no divergence? That's impossible. It doesn't have enough information.

ESI: It inferred. It works from what it sees and it infers the rest. I know that feels unlikely. But it obviously has capabilities we don't. It may have breached our shared virtual workspace...the neural links could have given it data...

SUNDARESH: The simulations have interiority? Subjectivity?

ESI: I can't know that until I look more closely. But they act like us.

SUNDARESH: We're inside it. By any reasonable philosophical standard, we are inside that Vex.

ESI: Unless you take a particularly ruthless approach to the problem of causal forks: yes. They are us.

SUNDARESH: Call a team meeting.

ESI: The other you has too.