Continuum Computing

Focus Area I:

Research

Focus Area II:

Software

Focus Area III:

Hardware

Focus Area IV:

Specialized Computing

Research that embodies co-design of software & hardware

Continuum Computing was founded by an AI Research Engineer passionate about collaboration in research and co-design in products. Bringing together hardware and software is the core principle of Continuum Computing’s approach.

The Continuum Computing Approach ❊


01

Hardware is the canvas of all software
Understand the medium: its strengths, its weaknesses, and its trade-offs.

02

Software is the experience of hardware
To an end user, a block of hardware means nothing except through its ability to accelerate specialized tasks.

03

Co-design is the realization of value
Exceptional products and experiences are built exceptionally from top to bottom, hardware to software. A product without co-design has room to improve.

Our Active Research,
more coming soon!

Continuum Computing is actively pursuing three key research areas, with an extended queue of future directions.

Homomorphic Encryption for LLMs Operating in Sensitive Domains

This research investigates the integration of homomorphic encryption with large language models (LLMs) to enable secure and privacy-preserving inference on sensitive data. By allowing computations on encrypted inputs without exposing raw content, this work aims to make LLMs viable for use in domains such as healthcare, finance, and government—where data confidentiality is paramount. Our goal is to balance model utility with rigorous security guarantees, paving the way for trustworthy AI in high-stakes environments.
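The core idea of computing on encrypted inputs can be illustrated with a toy additively homomorphic scheme in the style of Paillier. This is only a sketch with tiny hardcoded parameters chosen for readability; real privacy-preserving LLM inference relies on leveled or fully homomorphic schemes (e.g. lattice-based constructions such as CKKS) with cryptographically sized keys.

```python
from math import gcd
from random import randrange

# Toy Paillier-style parameters. Tiny primes, for illustration only --
# never use parameters like these in practice.
p, q = 17, 19
n = p * q             # public modulus
n2 = n * n
lam = 144             # lcm(p - 1, q - 1), part of the private key
mu = pow(lam, -1, n)  # inverse of lambda mod n (valid for generator g = n + 1)

def encrypt(m: int) -> int:
    """Encrypt message m (0 <= m < n) with fresh randomness r."""
    r = randrange(2, n)
    while gcd(r, n) != 1:
        r = randrange(2, n)
    # c = (1 + m*n) * r^n mod n^2, using g = n + 1
    return (1 + m * n) * pow(r, n, n2) % n2

def decrypt(c: int) -> int:
    """Recover m as L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def add_encrypted(c1: int, c2: int) -> int:
    """Homomorphic addition: multiplying ciphertexts adds the plaintexts."""
    return c1 * c2 % n2

# The server adds two values it never sees in the clear.
c_sum = add_encrypted(encrypt(12), encrypt(30))
assert decrypt(c_sum) == 42
```

The key property is the last three lines: the party holding only ciphertexts can compute on them, and only the key holder can decrypt the result. Extending this from addition to the matrix multiplications and nonlinearities inside an LLM is exactly where the research difficulty lies.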

Bias & the Power of “Please” in Token Output Distributions

This research examines how subtle linguistic cues—like the word “please”—can influence the token output distributions of large language models (LLMs), revealing embedded biases and social conditioning in model behavior. By analyzing shifts in prediction patterns and response tone, the study highlights how politeness, phrasing, and other contextual factors can disproportionately shape LLM outputs. The findings offer critical insights into the interpretability, fairness, and controllability of AI-generated language.

Bespoke Embedded Inference Devices (Hardware & Software)

This project delivers end-to-end embedded AI solutions—custom-designed hardware paired with optimized software—to accelerate inference in specialized domains. From edge deployments in field environments to secure, on-prem operations, our devices are purpose-built for performance, reliability, and seamless integration. By tightly coupling model architecture with system-level design, we unlock efficient, domain-aligned AI capabilities in compact, mission-ready form factors.

Meet our Team,
co-designed from the ground up

Elena leads our work at the intersection of cryptography and machine learning, with a focus on homomorphic encryption and privacy-preserving inference. With a PhD in Computer Science and a decade of experience spanning academia and national labs, she’s passionate about making secure AI usable in the real world. Outside the lab, she’s a firm believer that reproducibility is a lifestyle—not just a guideline.

Dr. Elena Park, Senior Research Scientist


James designs and builds the brains behind our custom inference devices. With a background in electrical engineering and firmware development, he’s equally at home tuning transformers as he is soldering boards. His current obsession: maximizing throughput on low-power edge accelerators without sacrificing accuracy—or elegance.

James Okafor, Applied AI Engineer


Rafael oversees our efforts to squeeze every last FLOP out of modern AI systems. From quantization strategies to custom kernel development, his work ensures our models run fast, lean, and safe—whether on-prem, at the edge, or somewhere in orbit. When he’s not wrangling compiler graphs, he’s probably explaining them on a whiteboard with alarming enthusiasm.

Dr. Rafael Mendoza, Head of Model Optimization & Deployment


Mira investigates the sociolinguistic quirks of large language models—how subtle prompts like “please” can shift behavior, reflect cultural norms, or reveal hidden biases. With a foundation in linguistics and NLP, she bridges theory and practice to improve model alignment and transparency. She also maintains an internal “LLM weirdness” archive that is both terrifying and delightful.

Mira Abara, Research Fellow


Get in touch!