Restoring communication for people with dysarthria secondary to pontine stroke remains a critical challenge. Intracortical brain-computer interfaces (iBCIs) have demonstrated great potential for speech restoration in people with amyotrophic lateral sclerosis (ALS), with 1-24% word error rates (WERs) on a 125,000-word vocabulary. In pontine stroke, electrocorticography (ECoG) BCIs achieved a 25.5% WER with a smaller 1,024-word vocabulary. Whether intracortical BCI performance improvements extend to people with pontine stroke-induced dysarthria remains unclear. Here, we show that neural activity from a single 64-channel microelectrode array in orofacial motor cortex can predict attempted speech in a person with pontine stroke more accurately than prior ECoG BCI work and comparably to prior iBCI work. We trained a neural network decoder to predict phoneme probabilities from spiking rates and spike-band power as BrainGate2 participant ‘T16’ mimed (mouthed without vocalization) sentences from a large vocabulary. A series of language models converted these probabilities into word sequences. This decoding architecture has remained stable for more than two years post-implantation, achieving a median 19.6% WER with a 125,000-word vocabulary and a median 10.0% WER with a 1,024-word vocabulary (a 60.8% reduction relative to prior ECoG studies). This framework also generalized beyond cue repetition, enabling T16 to communicate spontaneously via the iBCI in a question-and-answer setting with a 35.2% WER. These results demonstrate that brain-to-text decoding from a small patch of cortex can outperform ECoG-based systems in individuals with pontine stroke and is comparable to early speech iBCIs in individuals with ALS.
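The two-stage pipeline described above (per-timestep phoneme probabilities, then conversion to words) can be illustrated with a minimal toy sketch. This is not the authors' implementation: the phoneme inventory, the CTC-style greedy collapse, and the one-word lexicon below are all illustrative assumptions standing in for the actual neural network decoder and language models.

```python
import numpy as np

# Toy phoneme inventory with a CTC-style blank "_" (assumption; the
# paper's actual phoneme set and decoder are not specified here).
PHONEMES = ["_", "HH", "AH", "L", "OW"]

def greedy_phonemes(probs):
    """Collapse per-timestep phoneme probabilities (T x P) into a phoneme
    sequence: argmax per frame, merge adjacent repeats, drop blanks."""
    ids = probs.argmax(axis=1)
    seq, prev = [], None
    for i in ids:
        if i != prev and PHONEMES[i] != "_":
            seq.append(PHONEMES[i])
        prev = i
    return seq

# Toy lexicon standing in for the paper's language models, which
# rescore phoneme probabilities into full word sequences.
LEXICON = {("HH", "AH", "L", "OW"): "hello"}

def decode_word(probs):
    return LEXICON.get(tuple(greedy_phonemes(probs)), "<unk>")

# Hand-built one-hot "probabilities" for frames HH HH _ AH L L _ OW.
frames = [1, 1, 0, 2, 3, 3, 0, 4]
probs = np.eye(len(PHONEMES))[frames]
print(decode_word(probs))  # -> hello
```

In the actual system, the first stage is a recurrent network over spiking rates and spike-band power, and the second stage is a cascade of n-gram and neural language models rather than a lexicon lookup; the sketch only conveys the structure of the probabilities-to-words conversion.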