I've managed to rearrange the proof to not explicitly mention any Markov chains at all (although they are very much beneath the surface), and instead rely on a discrete version of the divergence theorem, namely:

Discrete divergence theorem Let $G$ be a directed weighted graph, with each edge $e$ having a "flow" $w_e \geq 0$, and with only finitely many outgoing edges from any vertex. Define the "divergence" $\mathrm{div}_v$ at a vertex $v$ to be the sum of the flows out of $v$ minus the sum of the flows into $v$ (this is also known as the (negative) excess at $v$). Then, for any finite set $\Omega$ of vertices, the sum $\sum_{v \in \Omega} \mathrm{div}_v$ of the divergences in $\Omega$ is equal to the outflow of $\Omega$ (the sum of all the flows from a vertex in $\Omega$ to a vertex outside of $\Omega$) minus the inflow (the sum of all the flows from a vertex outside of $\Omega$ to one inside $\Omega$).

Proof. Sum up all the outflows minus inflows from the vertices in $\Omega$: the contributions of any flow from one vertex in $\Omega$ to another cancel out. An infinite number of inflows is not a problem, as all such contributions have the same sign. $\Box$
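
For readers who like to see such identities mechanically verified, here is a minimal numerical check of the theorem on a small random weighted digraph; the graph, the random seed, and the set $\Omega$ below are arbitrary illustrative choices, not taken from the argument.

```python
# Toy check of the discrete divergence theorem on a random weighted digraph.
import random

random.seed(0)
vertices = range(10)
# flow[(u, v)] = weight of the directed edge u -> v
flow = {(u, v): random.random()
        for u in vertices for v in vertices
        if u != v and random.random() < 0.3}

def divergence(v):
    # flow out of v minus flow into v
    return (sum(w for (a, b), w in flow.items() if a == v)
            - sum(w for (a, b), w in flow.items() if b == v))

omega = {2, 3, 5, 7}  # an arbitrary finite set of vertices
total_div = sum(divergence(v) for v in omega)
outflow = sum(w for (a, b), w in flow.items() if a in omega and b not in omega)
inflow = sum(w for (a, b), w in flow.items() if a not in omega and b in omega)
assert abs(total_div - (outflow - inflow)) < 1e-9
print(total_div, outflow - inflow)  # the two quantities agree
```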

Proof of #1196. Consider the graph whose vertices are the natural numbers, and for which there is an edge from $nq$ to $n$ with flow $\frac{\Lambda(q)}{nq \log^2(nq)}$ for any $n \geq 1$ and $q>1$. The fundamental identity $\sum_{q|n} \Lambda(q) = \log n$ tells us that the outflow from any $n>1$ is $\frac{1}{n \log n}$. A Mertens' theorem calculation shows that the inflow $\sum_q \frac{\Lambda(q)}{nq \log^2(nq)}$ is $\frac{1}{n \log n} + O(\frac{1}{n\log^2 n})$. So the divergence at $n$ is $O(\frac{1}{n \log^2 n})$. In particular the total $\ell^1$ norm of the divergence on vertices $n \geq x$ is $O(1/\log x)$. By the divergence theorem, we conclude that for any finite set $\Omega$ of numbers greater than $x$, the difference between the outflow and inflow of $\Omega$ is $O(1/\log x)$.
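
As a sanity check (not part of the proof), one can numerically test the two estimates above, truncating the infinite inflow sums at a cutoff; the sieve implementation, the cutoff NMAX, and the sample values of $n$ below are all illustrative choices of mine.

```python
import math

NMAX = 10 ** 6  # truncation point for the (infinite) inflow sums

# smallest-prime-factor sieve, used below to evaluate the von Mangoldt function
spf = list(range(NMAX + 1))
for p in range(2, int(NMAX ** 0.5) + 1):
    if spf[p] == p:  # p is prime
        for m in range(p * p, NMAX + 1, p):
            if spf[m] == m:
                spf[m] = p

def mangoldt(q):
    # Lambda(q) = log p if q = p^k for a prime p and some k >= 1, else 0
    p = spf[q]
    while q % p == 0:
        q //= p
    return math.log(p) if q == 1 else 0.0

def outflow(n):
    # sum over q | n, q > 1 of Lambda(q)/(n log^2 n); equals 1/(n log n)
    # exactly, by the identity sum_{q | n} Lambda(q) = log n
    return sum(mangoldt(q) for q in range(2, n + 1) if n % q == 0) / (n * math.log(n) ** 2)

def inflow(n):
    # sum over q > 1 of Lambda(q)/(nq log^2(nq)), truncated at nq <= NMAX
    return sum(mangoldt(q) / (n * q * math.log(n * q) ** 2)
               for q in range(2, NMAX // n + 1))

for n in (10, 100, 1000):
    target = 1 / (n * math.log(n))
    print(n, outflow(n) / target, inflow(n) / target)
    # the first ratio is exactly 1 (up to rounding); the second would tend
    # to 1 with the full infinite sum, but the truncation at nq <= NMAX
    # drops a tail of relative size roughly log(n)/log(NMAX)
```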

Now let $A$ be a primitive set consisting of elements exceeding $x$, which we can assume (by the usual limiting argument) to be finite. We set $\Omega$ to be all the factors of elements of $A$ that exceed $x$; this is a finite set that contains $A$ (we consider every element to be a factor of itself). Every element $n$ of $A$ contributes an inflow of $\sum_q \frac{\Lambda(q)}{nq \log^2(nq)} = \frac{1}{n \log n} + O(\frac{1}{n \log^2 n})$, since every $nq$ lies outside of $\Omega$ thanks to the primitivity of $A$ (if $nq$ were a factor of an element $m$ of $A$, then $n$ would be a proper factor of $m$). So the total inflow is at least $\sum_{n \in A} \frac{1}{n \log n} - O(1/\log x)$. On the other hand, the total outflow is at most $\sum_{q,n: n \leq x < nq} \frac{\Lambda(q)}{nq \log^2(nq)}$, which by another application of Mertens' theorem is $1 + O(1/\log x)$. Thus, by the divergence theorem, we conclude $\sum_{n \in A} \frac{1}{n \log n} \leq 1 + O(1/\log x)$. $\Box$
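
To illustrate the statement just proved (again, not part of the argument), here is a quick check of primitivity and of the sum $\sum_{n \in A} \frac{1}{n \log n}$ for a sample primitive set; the choice of $A$, namely the integers in $(x, 2x]$, is mine.

```python
# Illustrative check: the integers in (x, 2x] form a primitive set, since one
# element dividing a strictly larger one would force at least a doubling.
import math

def is_primitive(A):
    A = sorted(A)
    return not any(b % a == 0 for i, a in enumerate(A) for b in A[i + 1:])

x = 100
A = list(range(x + 1, 2 * x + 1))
assert is_primitive(A)
print(sum(1 / (n * math.log(n)) for n in A))
# about log(2)/log(x), comfortably below the bound 1 + O(1/log x)
```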

Remark 1 One could, if desired, tweak the weights from $\frac{\Lambda(q)}{nq \log^2(nq)}$ to $\frac{\Lambda(q)}{\log(nq)} \nu(nq)$, to make the flow perfectly divergence-free (except at $n=1$ where it is a sink of inflow $\nu(1)=1$), with an inflow and outflow of exactly $\nu(n)$ at every $n > 1$, and remove the requirement that the elements of $\Omega$ exceed $x$; this then gives the exact inequality $\sum_{n \in A} \nu(n) \leq 1$.
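
To spell out the computation implicit in this remark (a small worked step, using nothing beyond the fundamental identity quoted in the proof): with the tweaked weights, the outflow at any $m > 1$ is
$$\sum_{q | m} \frac{\Lambda(q)}{\log m} \nu(m) = \frac{\nu(m)}{\log m} \sum_{q | m} \Lambda(q) = \nu(m),$$
so divergence-freeness at every $n > 1$ amounts to the self-consistency condition $\sum_{q > 1} \frac{\Lambda(q)}{\log(nq)} \nu(nq) = \nu(n)$ on the weight $\nu$.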

Remark 2 Every flow graph generates an outflow Markov chain and an inflow Markov chain (except at vertices where the outflow or inflow vanishes). For the graph under consideration here, these correspond to the upward and downward Markov chains in previous arguments. But it seems that it is really the flow graph that is the fundamental object, not the two Markov chains.
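
To make the canonical construction concrete, here is a sketch assuming a flow graph represented as a dict of positive edge weights (the representation and names are mine, not from the post). For the number-theoretic graph above, whose edges run from $nq$ down to $n$, the outflow chain is the downward (dividing) chain and the inflow chain is the upward (multiplying) one.

```python
# Sketch: the two Markov chains canonically induced by a flow graph,
# assuming flow = {(u, v): weight} with all listed weights positive.
from collections import defaultdict

def induced_markov_chains(flow):
    out_total, in_total = defaultdict(float), defaultdict(float)
    for (u, v), w in flow.items():
        out_total[u] += w
        in_total[v] += w
    # outflow chain: from u, move along an outgoing edge with probability
    # proportional to its flow (undefined at vertices with zero outflow)
    outflow_chain = {(u, v): w / out_total[u] for (u, v), w in flow.items()}
    # inflow chain: from v, move *backwards* along an incoming edge with
    # probability proportional to its flow (undefined at zero-inflow vertices)
    inflow_chain = {(v, u): w / in_total[v] for (u, v), w in flow.items()}
    return outflow_chain, inflow_chain
```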

Remark 3 (added later) Perhaps with some tinkering with these flow weights, one can improve on the current best lower bound of $1 - \frac{\gamma}{\log x} + O(\frac{1}{\log^2 x})$ for $\sum_{n \in A} \frac{1}{n \log n}$.

Remark 4 (added later) This may end up being one of those math problems where the a posteriori difficulty of the problem is actually significantly lower than the a priori difficulty. (Other examples of such problems include the finite field Kakeya conjecture and the sensitivity conjecture. Within the field of anatomy of integers, the Hardy-Ramanujan law is of this form; the first proofs consisted of quite lengthy calculations, but the modern proofs are basically a routine application of the second moment method and Mertens' theorem.)

Remark 5 (added later) I had previously stated the opinion that the AI-generated proof had inadvertently highlighted a tighter connection between the anatomy of integers and the theory of Markov chains than had previously been explicitly noted in the literature. Based on further developments, I would like to update that opinion to the following: the AI-generated proof artefact, when combined with subsequent (and mostly human-generated) analysis, has revealed a tight connection between the anatomy of integers and flow network theory that does not, to my knowledge, have any explicit precursor in the literature (although related uses of Markov chains in adjacent settings do appear in that literature). With this reframing, the role of Markov chains is secondary: any flow network canonically generates two useful Markov chains, one for inflows and one for outflows, but it is possible to use flow network theory without direct mention of these chains, by using other tools in the theory such as the divergence theorem or MAXFLOW-MINCUT.

(Workflow: Given the analogy between this problem and Sperner's theorem / the LYM inequality, I started thinking about proofs of the latter. It felt like flows were playing a key role in our arguments, so I experimented first with the MAXFLOW-MINCUT theorem without much success, then hit upon using the divergence theorem instead, taking a cue from how vector fields (which are continuous analogues of flows) are used in PDE or in geometric inequalities. I will leave it as an exercise to the reader to prove the LYM inequality (and hence Sperner's theorem) via the discrete divergence theorem. EDIT: My one use of AI tooling in this process was to help filter out the unviable MAXFLOW-MINCUT approach: I asked ChatGPT to try to reprove Sperner's theorem using this result, and it instead gave me a proof using the Hall marriage theorem, which was not appropriate for what I had in mind, causing me to abandon this direction.)