TNQE breakthrough slashes quantum data encoding complexity on real hardware
Encoding classical data into quantum states has long been a major hurdle in quantum computing. The process often requires deep circuits and heavy resource use, slowing progress in the field. Now, a new framework called TNQE promises to simplify this challenge with a more efficient approach.
Researchers Guang Lin, Toshihisa Tanaka, and Qibin Zhao developed TNQE to tackle the data encoding bottleneck. Their method starts by breaking down classical inputs into tensor cores, which are then compiled into shallow quantum circuits. Unlike traditional techniques, TNQE avoids long-range entangling operations, keeping circuit depth low and qubit usage manageable.
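The article doesn't spell out which tensor factorization TNQE uses, but a standard way to break a classical input into a chain of small tensor cores is the tensor-train (matrix product state) decomposition, computed by sequential truncated SVDs. The sketch below is illustrative only; the function names and the `max_rank` truncation parameter are assumptions, not the authors' implementation.

```python
import numpy as np

def tt_decompose(x, dims, max_rank):
    """Split a length-prod(dims) vector into tensor-train cores
    via sequential truncated SVDs (the classic TT-SVD scheme)."""
    cores = []
    rank = 1
    mat = np.asarray(x, dtype=float)
    for d in dims[:-1]:
        # Fold the current bond and physical index into rows, SVD, truncate.
        mat = mat.reshape(rank * d, -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(u[:, :r].reshape(rank, d, r))
        mat = np.diag(s[:r]) @ vt[:r]
        rank = r
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the cores back into a flat vector, bond by bond."""
    out = cores[0]
    for c in cores[1:]:
        out = np.tensordot(out, c, axes=([out.ndim - 1], [0]))
    return out.reshape(-1)

# A 64-element signal viewed as a 2x2x2x2x2x2 tensor (one index per qubit).
x = np.random.default_rng(0).random(2 ** 6)
cores = tt_decompose(x, [2] * 6, max_rank=4)
rel_err = np.linalg.norm(x - tt_reconstruct(cores)) / np.linalg.norm(x)
```

Capping the bond rank (`max_rank`) is what keeps each core small, and it is this locality that allows the cores to be compiled into shallow circuits without long-range entangling gates.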
The team explored two strategies for converting tensor cores into executable circuits. TNQE-full transforms cores into isometries before assembling them into local unitaries, forming a sequential circuit. TNQE-core, on the other hand, prepares each core in parallel using dedicated sub-circuits, resulting in an even shallower overall structure.
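A key step in the TNQE-full strategy described above is promoting each core's isometry to a full unitary that can be realized as a gate. The paper's exact compilation procedure isn't given here; a common construction, sketched below, completes an n×k isometry to an n×n unitary by filling in the remaining columns with an orthonormal basis for the orthogonal complement (all names here are illustrative).

```python
import numpy as np

def isometry_to_unitary(v):
    """Embed an n-by-k isometry (V^dag V = I) into an n-by-n unitary
    by completing its columns to a full orthonormal basis."""
    n, k = v.shape
    rng = np.random.default_rng(0)
    # Random candidate columns, projected off span(V)...
    rest = rng.standard_normal((n, n - k))
    rest -= v @ (v.conj().T @ rest)
    # ...then orthonormalized; QR keeps them in the complement of span(V).
    q, _ = np.linalg.qr(rest)
    return np.hstack([v, q])

# Example: a 4x2 isometry (two qubits, core of bond dimension 2),
# obtained from the QR factorization of a random matrix.
a = np.random.default_rng(1).standard_normal((4, 2))
v, _ = np.linalg.qr(a)
u = isometry_to_unitary(v)
```

Applying the resulting unitary to a fixed initial state reproduces the action of the isometry on its input subspace, which is why a chain of such local unitaries can prepare the full tensor-network state sequentially.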
Testing on real hardware showed promising results. TNQE circuits matched the shallowness of amplitude encoding while scaling better to high-resolution images, successfully encoding a 256×256 grayscale picture. On the MNIST dataset, the method achieved a mean squared error of 0.021 with just four layers of circuit depth. Practical demonstrations ran on IBM quantum processors, including the 5-qubit ibmq_manila and the 127-qubit Eagle processor ibm_brisbane, proving feasibility on current hardware.
TNQE offers a structured, hardware-friendly way to encode classical data into quantum states. By keeping circuits shallow without sacrificing accuracy, and with demonstrations already run on IBM's current quantum processors, the framework could remove a persistent bottleneck on the path to practical quantum machine learning.