Research

The central theme of my research is the development of robust, scalable, and provably reliable numerical algorithms that overcome the "curse of dimensionality" in modern scientific computing. As we transition into the exascale era, the primary bottlenecks in simulation and discovery are no longer floating-point operations alone, but the massive communication costs and super-linear scaling inherent in high-dimensional operators. My research program addresses these challenges through three integrated pillars:

Hierarchical Compression & Geophysics

I seek to unify hierarchical-matrix (H-matrix) theory with tensor decompositions to create "full-stack" compression frameworks. By exploiting the low-rank structure of Green's functions and high-dimensional kernel matrices, we attain O(N) complexity for systems that were previously computationally intractable. This work can provide the mathematical backbone for operator learning, replacing black-box neural networks with transparent, physics-respecting approximations.
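The low-rank structure underlying this pillar can be seen directly: the interaction block of a smooth kernel between two well-separated point clusters has rapidly decaying singular values. A minimal NumPy sketch (illustrative only, not code from the papers below; the cluster geometry and tolerance are arbitrary choices):

```python
# Numerical low rank of a smooth kernel between two well-separated
# clusters -- the structure that H-matrix compression exploits.
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.random((n, 2))                  # source points in [0,1]^2
Y = rng.random((n, 2)) + [5.0, 0.0]     # target points, shifted far away

# Laplace-type kernel K_ij = log ||x_i - y_j||.
D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
K = np.log(D)

# Singular values decay fast, so few exceed a 1e-10 relative tolerance.
s = np.linalg.svd(K, compute_uv=False)
rank = int(np.sum(s > 1e-10 * s[0]))
print(rank, n)  # numerical rank is far smaller than n
```

Because such blocks compress to rank r &lt;&lt; n, storing and applying the operator costs O(N) rather than O(N^2).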

  • Parametric Hierarchical Matrix Approximations to Kernel Matrices, A. Khan, C. Chen, V. Rao, A. K. Saibaba, arXiv 2025. [arXiv]

  • A Simplified Fast Multipole Method Based on Recursive Skeletonization, A. Yesypenko, C. Chen, P.G. Martinsson, JCP 2025. [code]

  • An Algebraic Sparsified Nested Dissection Algorithm using Low-Rank Approximations, L. Cambier, C. Chen, E.G. Boman, S. Rajamanickam, R.S. Tuminaro, E. Darve, SIMAX 2020. [code]

  • A Robust Hierarchical Solver for Ill-conditioned Systems with Applications to Ice Sheet Modeling, C. Chen, E. G. Boman, S. Rajamanickam, R. S. Tuminaro, E. Darve, JCP 2019.

Randomized Numerical Linear Algebra (RandNLA)

Reliability is paramount in scientific discovery. My work in Randomized Numerical Linear Algebra focuses on moving beyond empirical speedups toward provable stability. By developing adaptive sketching and pivoting methods, we ensure that randomized solvers maintain the same accuracy as their deterministic counterparts while delivering substantial speedups.
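The adaptive idea above can be sketched in a few lines: grow a random sketch until a posterior error estimate certifies the requested tolerance. This is a generic randomized range finder in the spirit of Halko, Martinsson, and Tropp, not code from the papers listed; the block size and probe count are arbitrary choices:

```python
# Adaptive randomized range finder: enlarge the basis Q until the
# estimated residual ||A - Q Q^T A|| meets a user tolerance.
import numpy as np

def adaptive_range_finder(A, tol, block=10, rng=None):
    rng = rng or np.random.default_rng(0)
    m, n = A.shape
    Q = np.zeros((m, 0))
    while True:
        # Sketch a new block of random directions, orthogonal to Q.
        Y = A @ rng.standard_normal((n, block))
        Y -= Q @ (Q.T @ Y)
        Qb, _ = np.linalg.qr(Y)
        Q = np.hstack([Q, Qb])
        # Posterior error estimate from a small probe sketch.
        Z = A @ rng.standard_normal((n, 5))
        err = np.linalg.norm(Z - Q @ (Q.T @ Z)) / np.linalg.norm(Z)
        if err < tol or Q.shape[1] >= min(m, n):
            return Q

# Test matrix with rapidly decaying singular values.
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((200, 200)))
V, _ = np.linalg.qr(rng.standard_normal((200, 200)))
A = (U * 10.0 ** -np.arange(200)) @ V.T

Q = adaptive_range_finder(A, tol=1e-8)
print(Q.shape[1])  # basis size, close to the numerical rank of A
```

The stopping test makes the speedup trustworthy: the solver certifies its own accuracy instead of relying on a rank guess.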

  • Robust Blockwise Random Pivoting: Fast and Accurate Adaptive Interpolative Decomposition, Y. Dong, C. Chen, P.G. Martinsson, K. Pearce, SIMAX 2025. [code]

  • Adaptive Parallelizable Algorithms for Interpolative Decompositions via Partially Pivoted LU, K. Pearce, C. Chen, Y. Dong, P.G. Martinsson, NLAA 2025. [code]

  • RCHOL: Randomized Cholesky Factorization for Solving SDD Linear Systems, C. Chen, T. Liang, G. Biros, SISC 2021. [code]

  • Fast Approximation of the Gauss-Newton Hessian Matrix, C. Chen, S. Reiz, C. Yu, H.J. Bungartz, G. Biros, SIMAX 2020.

High Performance Computing (HPC)

Theoretical breakthroughs must translate to wall-clock performance. Leveraging my experience at Sandia and LLNL, my group develops communication-avoiding algorithms optimized for GPU-accelerated architectures. We apply these kernels to challenging problems, such as the nonlinear Stokes equations in ice sheet modeling and dislocation dynamics, where our mathematical advances remove some of the most pressing barriers to physical modeling.
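A canonical example of the communication-avoiding pattern is tall-skinny QR (TSQR): each node factors its local block independently, and only the small R factors are combined, so the tall matrix is read once and messages stay tiny. A serial NumPy sketch of the reduction (illustrative, not the group's production code; the block count stands in for the number of nodes):

```python
# TSQR reduction pattern: local QRs in parallel, then one small combine.
import numpy as np

def tsqr_R(A, nblocks=4):
    """R factor of a tall-skinny A via a two-level reduction."""
    blocks = np.array_split(A, nblocks, axis=0)
    # Level 1: independent local QRs (would run on separate nodes/GPUs).
    Rs = [np.linalg.qr(B)[1] for B in blocks]
    # Level 2: stack the small R factors and factor once more.
    return np.linalg.qr(np.vstack(Rs))[1]

rng = np.random.default_rng(2)
A = rng.standard_normal((4000, 8))
R = tsqr_R(A)

# R^T R = A^T A, so R agrees with a direct QR up to column signs.
print(np.allclose(R.T @ R, A.T @ A))
```

Since A^T A = sum of R_i^T R_i, the combined factorization reproduces the direct R exactly (up to signs) while communicating only 8-by-8 blocks.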

  • Parallel GPU-Accelerated Randomized Construction of Approximate Cholesky Preconditioners, T. Liang, C. Chen, Y. Yaniv, H. Luo, D. Tench, X. Li, A. Buluc, J. Demmel, arXiv 2025. [arXiv]

  • Scalable KNN Graph Construction on Heterogeneous Architectures, W. Ruys, A. Ghafouri, C. Chen, G. Biros, ACM TOPC 2025. [code]

  • An O(N) distributed-memory parallel direct solver for planar integral equations, T. Liang, C. Chen, P.G. Martinsson, G. Biros, IPDPS 2024. [code]

  • Solving Linear Systems on a GPU with Hierarchically Off-Diagonal Low-Rank Approximations, C. Chen, P.G. Martinsson, SC 2022. [code]