The issue with great names is they’re popular.

Last week, OpenAI rolled out a new product called Operator — which, as you might’ve noticed, was also my product’s name. Because we’re a small startup, we can’t exactly throw our weight around, so we’ve decided to rebrand. Our new name is Paraworker.
Why Paraworker? It stands for "parallel worker," matching our goal to automate the boring yet important tasks in life (like organization).
Plus, I own the .com domain: www.paraworker.com.

What’s Been Happening?
Paraworker’s been in development for about 11 months. Except for a few friends pitching in, I’ve been tackling the beta solo — which has meant facing more than a few roadblocks. Building agentic AI that runs on your own device instead of some distant server has been surprisingly tricky. And by “tricky,” I mean “occasionally hair-pulling.”
The Beta
Paraworker’s beta will focus on cleaning up your email inboxes. I have 6–8 email accounts, and across them, I’d estimate around 10,000 unread emails. The great thing about Paraworker is that it can clean this up for me without me worrying about missing key information.

You stay in control of what happens to each type of email in your inboxes. Paraworker lets you pick between two goals, each leading to different recommendations based on your settings:
1. Inbox Zero: This option aggressively eliminates all marketing-related emails.
2. Reduce Clutter: This option is more selective. It deletes promotional emails only when they’ve expired, removes newsletters only when a new one comes in from the same sender, keeps app notifications for 24 hours, and moves all receipts into a designated “Receipts” folder of your choice.
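To make the two goals concrete, here's a minimal sketch of what triage rules like these could look like in code. Everything here (the `Email` fields, the category names, the function signatures) is a hypothetical illustration, not Paraworker's actual implementation:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Email:
    sender: str
    category: str                        # e.g. "marketing", "newsletter", "notification", "receipt", "personal"
    received: datetime
    expires: Optional[datetime] = None   # promo expiry date, if one is known

def inbox_zero(email: Email) -> str:
    # Aggressive: eliminate anything marketing-related outright.
    if email.category in ("marketing", "newsletter"):
        return "trash"
    return "keep"

def reduce_clutter(email: Email, now: datetime,
                   latest_newsletter: dict,
                   receipts_folder: str = "Receipts") -> str:
    if email.category == "marketing":
        # Delete promotional emails only once they've expired.
        if email.expires is not None and email.expires < now:
            return "trash"
        return "keep"
    if email.category == "newsletter":
        # Remove a newsletter only when a newer one from the same sender exists.
        newest = latest_newsletter.get(email.sender)
        if newest is not None and newest > email.received:
            return "trash"
        return "keep"
    if email.category == "notification":
        # App notifications live for 24 hours.
        return "keep" if now - email.received < timedelta(hours=24) else "trash"
    if email.category == "receipt":
        # Receipts get filed, not deleted.
        return f"move:{receipts_folder}"
    return "keep"
```

Keeping each goal as its own small, readable function has a nice side effect: you can always see exactly why a given email was kept, trashed, or filed.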
Challenges

You may have heard of DeepSeek R1, a next-gen model that's generating hype for matching OpenAI's o1 at a fraction of the cost. Its appeal lies in its compact size, enabling it to run efficiently on smaller hardware. Running large language models on-device isn't brand new, but agentic AI (AI that takes actions automatically on your behalf) is more demanding. For the average laptop, memory is a big hurdle. Some larger emails push Paraworker to use up to 20GB of RAM — definitely not ideal if you want to do anything else in the meantime.
I can dial back the memory usage, but that slows things down to the point where it’s worth asking: If it’s so slow, why run it on-device at all? That balance between memory usage and speed has been a major challenge. I’m working relentlessly to make sure Paraworker feels almost invisible while it does its job.
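For the curious, one common way to bound peak memory is to feed a long email body to the model in fixed-size, overlapping chunks rather than all at once. This is a hypothetical sketch of that pattern, not Paraworker's code; `chunk_size` and `overlap` are illustrative knobs, where smaller chunks mean less RAM but more passes (and so less speed):

```python
def chunk_text(body: str, chunk_size: int = 2000, overlap: int = 200):
    """Yield overlapping chunks so key info spanning a boundary isn't lost."""
    step = chunk_size - overlap
    for start in range(0, max(len(body), 1), step):
        yield body[start:start + chunk_size]
        if start + chunk_size >= len(body):
            break

def summarize(body: str, summarize_chunk) -> str:
    # summarize_chunk stands in for a call into a small local model;
    # peak memory is bounded by chunk_size, not the full email length.
    partials = [summarize_chunk(chunk) for chunk in chunk_text(body)]
    return " ".join(partials)
```

The overlap is the subtle part: without it, a date or order number sitting exactly on a chunk boundary would be split in half and missed by both passes.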
The Build

I’ve gone through over 600 prototypes — yes, for real. Most were small iterations or quick experiments. With tools like Cursor, I can skip the tedious setup and iterate rapidly, even if large language models occasionally “help” by deleting perfectly good code.
Though most prototypes weren't keepers, each one taught me something new, and together they convinced me that a lower-memory version is possible without sacrificing speed. After dozens of false starts and a few breakthroughs, I finally think I've hit the right balance.
Why On-Device?
Although it would be significantly easier to use a cloud provider like OpenAI or Anthropic, I strongly believe AI systems should run on our devices. Here’s why:

Sure, chatbots are often run in the cloud, but personal data like emails, messages, or notes can be used to train models — or could get repurposed for advertising. Running AI locally means you hold the data.

Large-scale AI can consume serious power. On-device models (especially optimized ones like DeepSeek R1) use far less energy. I also can't rely on LLMs for every tiny task, so I use faster, more efficient NLP methods where possible.

Your emails, messages, and notes contain important information — we can’t afford to miss any of it. That’s why I’ve developed an AI system that always returns the exact same output for the same input. In other words, if you pass in a document (like an email), it will always decide to keep or trash it consistently, and pull out the same key information every time. This consistency has helped me iterate faster and reach a high level of accuracy — something cloud AI services don’t offer yet.
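As a small illustration of that determinism, the kind of rule-based pass I lean on for tiny tasks can be as simple as a regular-expression extractor: there's no sampling involved, so the same email body always produces the same fields. The field names and patterns below are hypothetical examples, not Paraworker's actual ones:

```python
import re

# Hypothetical patterns for pulling key fields out of a receipt email.
# Pure pattern matching: identical input always yields identical output.
ORDER_RE = re.compile(r"order\s*#?\s*(\d{4,})", re.IGNORECASE)
TOTAL_RE = re.compile(r"total[:\s]*\$(\d+(?:\.\d{2})?)", re.IGNORECASE)

def extract_receipt_fields(body: str) -> dict:
    """Deterministically extract an order number and total from an email body."""
    order = ORDER_RE.search(body)
    total = TOTAL_RE.search(body)
    return {
        "order_number": order.group(1) if order else None,
        "total": float(total.group(1)) if total else None,
    }
```

Because the function is pure, calling it twice on the same email is guaranteed to return the same result, which is exactly the property that makes testing and accuracy measurement tractable.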
Using cloud models also means total reliance on their availability; with Paraworker, when your computer is on, Paraworker is on. Plus, these providers often tweak their models, which can lead to unpredictable outputs. By running everything on your device, Paraworker delivers reliable, stable results.
Final Thoughts

The idea of having decent AI running on our devices, actually doing useful personal work for us, has kept me focused on this goal for the past 11 months. It’s a powerful notion: an AI that handles life and work admin so we can stay in the moment, focus on more important tasks, and feel a little more carefree.
If you want more updates, follow my progress on Bluesky or LinkedIn.
Thank you for sticking with me,
Joel