Hi, I'm John Bostanci

I am a PhD student in the Theory Group at Columbia, advised by Henry Yuen. I'm interested in quantum computing, especially complexity, cryptography, and learning theory. Some specific problems I think about include QMA versus QMA1, shadow tomography, and the complexity of unitary synthesis, among other problems in quantum complexity and cryptography.

Before Columbia, I was a graduate student at the Institute for Quantum Computing at the University of Waterloo, advised by John Watrous.

Email: johnb at cs dot columbia dot edu


Publications

  1. An efficient quantum parallel repetition theorem and applications. John Bostanci, Luowen Qian, Nicholas Spooner, Henry Yuen. STOC 2024, QIP 2024 Short Plenary Talk [Slides].
  2. Unitary Complexity and the Uhlmann Transformation Problem. John Bostanci, Yuval Efron, Tony Metger, Alexander Poremba, Luowen Qian, Henry Yuen. Preprint. QIP 2024 Long Plenary Talk.
  3. Quantum Event Learning and Gentle Random Measurements. Adam Bene Watts and John Bostanci. ITCS 2024 [Slides, Talk].
  4. Finding the disjointness of stabilizer codes is NP-complete. John Bostanci and Alex Kubica. Physical Review Research 3, 2021.
  5. Quantum game theory and the complexity of approximating quantum Nash equilibria. John Bostanci and John Watrous. Quantum 6, 2022.


Teaching

In Fall 2022 I was a TA for Introduction to Quantum Computing at Columbia, taught by Henry Yuen.

In Summer 2023 I was a TA for Topological Aspects of Error Correcting Codes at the Park City Mathematics Institute Graduate Summer School, taught by Jeongwan Haah. The problem sets and solutions are available here.

Work Experience

I previously worked for Kalshi, a start-up derivatives exchange, where I helped design and build the exchange and built most of its connections with external parties, including Bloomberg, brokers, and market makers.

I also worked at Citadel on the Alpha Research and Development team. My projects included X-Alpha (a graph-based resource manager for creating terms) and Leonov (a neural architecture that outperformed human modelers on near-term alpha).