Pricing and Agents
PARAMUS Core
Central LLM / Orchestrator – required
- Chat dialog application
- Save/Restore/Resume Chats
- Configuration for agents
- Per user
- Includes project/task management
LOCAL LLM OR LICENCE FOR LLM NEEDED
Open Cheminformatics Agent
Free computational tools for molecules and reactions
- RDKit
- Chempy
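For orientation, a minimal sketch of the kind of RDKit/ChemPy calls this agent wraps; the molecule and reaction below are arbitrary examples, not PARAMUS defaults.

```python
# Minimal sketch of the kind of call the Cheminformatics Agent issues.
# The SMILES string and reaction are illustrative, not PARAMUS defaults.
from rdkit import Chem
from rdkit.Chem import Descriptors
from chempy import balance_stoichiometry

# RDKit: parse a molecule and compute a simple descriptor
ethanol = Chem.MolFromSmiles("CCO")
print(Chem.MolToSmiles(ethanol))             # canonical SMILES: CCO
print(round(Descriptors.MolWt(ethanol), 2))  # molecular weight ~46.07

# ChemPy: balance a reaction given reactant and product sets
reactants, products = balance_stoichiometry({"H2", "O2"}, {"H2O"})
print(dict(reactants), dict(products))       # {'H2': 2, 'O2': 1} {'H2O': 2}
```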
Research Data Agent
Workflow from ELN, LIMS, SDMS, LES
- Read experiments
- Read out reactor timeseries
- Find LIMS calibrations
SYSTEMS: (Ask for custom implementations)
- ChemInf-EDU (ELN and LIMS)
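As an illustration, a hypothetical sketch of reading a reactor time series from a CSV export; the file name and column names are assumptions, not a documented PARAMUS or ChemInf-EDU interface.

```python
# Hypothetical sketch: reading a reactor time series exported from an ELN/LIMS
# (e.g. ChemInf-EDU) as CSV. File and column names are assumptions for
# illustration only.
import pandas as pd

ts = pd.read_csv("reactor_run_042.csv", parse_dates=["timestamp"])
ts = ts.set_index("timestamp")

# Resample to 1-minute means and summarize the temperature channel
summary = ts["reactor_temp_C"].resample("1min").mean()
print(summary.describe())
```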
Synthesis Agent
Devise synthetic routes for chemical compounds
- Proposes pathways for molecule synthesis, evaluates feasibility
- Aligns synthesis plans with LLM’s strategic objectives
LICENCE FOR IBM RXN NEEDED; new: ASKCOS (planned)
Literature Agent
Aggregates and reads existing chemical research, including online databases
- Searches databases such as Google, ChemSpider, and PubChem
- Updates the LLM with the latest research, provides summaries on requested topics
API-KEYS (and quotas) NEEDED
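As an example of such a lookup, a small sketch against the public PubChem PUG REST API (no key needed, but usage quotas apply); the compound name is an arbitrary example.

```python
# Sketch of a database lookup of the kind this agent performs, using the
# public PubChem PUG REST API. The compound name is an arbitrary example.
import requests

name = "aspirin"
url = (
    "https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/name/"
    f"{name}/property/MolecularFormula,MolecularWeight,CanonicalSMILES/JSON"
)
resp = requests.get(url, timeout=30)
resp.raise_for_status()
props = resp.json()["PropertyTable"]["Properties"][0]
print(props["MolecularFormula"], props["MolecularWeight"], props["CanonicalSMILES"])
```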
Computational Chemistry Agent
Quantum chemical calculations
- Runs molecular dynamics and quantum chemistry calculations, predicts kinetics and thermodynamics, and calculates theoretical spectra
- PSI4
- ORCA
unstable (experimental)
LICENCE FOR ORCA NEEDED
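For reference, a minimal PSI4 sketch of a single-point energy calculation, the simplest kind of job this agent can dispatch; geometry, method, and basis set are arbitrary examples.

```python
# Minimal PSI4 sketch: a single-point energy calculation.
# Geometry, method, and basis set are arbitrary examples.
import psi4

psi4.set_memory("500 MB")
h2o = psi4.geometry("""
O
H 1 0.96
H 1 0.96 2 104.5
""")

# Hartree-Fock single-point energy in the cc-pVDZ basis
energy = psi4.energy("scf/cc-pvdz")
print(f"SCF energy: {energy:.6f} Hartree")
```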
Helper Agent
Can perform calculations and simple document reading
- Calculator (Python)
- RAG
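As a rough illustration of the calculator tool, a sketch of a restricted arithmetic evaluator; this is illustrative only, not the PARAMUS-internal implementation.

```python
# Illustrative sketch of a restricted calculator tool (not the PARAMUS-internal one).
# Evaluates plain arithmetic expressions without calling eval().
import ast
import operator

_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expr: str) -> float:
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp):
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp):
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"Unsupported expression: {expr}")
    return _eval(ast.parse(expr, mode="eval"))

print(safe_eval("2 * (3.5 + 1.5) ** 2"))  # 50.0
```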
Analytical Agent
Not yet available
Open for co-development!
We offer consulting on obtaining the software, the licences, and the required keys.
Run on different LLM providers
The CORE as well as the agents need their own LLM runtime subscriptions! These costs are additional to PARAMUS but give you the freedom to balance your spending. You choose how much power PARAMUS has in each field of expertise.
Mix them up for a well-balanced system
You should configure different runtimes for different agents, just as your team members have different skills (and salaries). By assigning agents to different providers, we have achieved remarkable results in optimizing the performance of PARAMUS (as a whole system) in terms of response time, response quality, and total cost.
For example: run the free Calculator Agent on the cheap but VERY FAST(!) xAI Grok-2 for non-critical work, and run the PARAMUS Core on OpenAI GPT-4o. In our experience, the PARAMUS Core (supervisor) should run on the best model you can afford; simple agents like the Calculator Agent can get by with a "dumber" model.
Available LLM runtimes:
- OpenAI
- Google Vertex AI (untested)
- xAI
- Anthropic
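A hypothetical example of such an agent-to-provider mapping; the keys, model names, and format are illustrative only and do not reflect the actual per-user PARAMUS configuration dialog.

```python
# Hypothetical agent-to-provider mapping, for illustration only.
# The keys and model names are assumptions, not PARAMUS configuration syntax.
AGENT_RUNTIMES = {
    "paramus_core":      {"provider": "openai",    "model": "gpt-4o"},        # best model for the supervisor
    "helper_calculator": {"provider": "xai",       "model": "grok-2"},        # cheap and fast for simple tools
    "cheminformatics":   {"provider": "anthropic", "model": "claude-3-5-sonnet"},
    "literature":        {"provider": "openai",    "model": "gpt-4o-mini"},   # high call volume, low cost
}
```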
SECURE by design
How to install PARAMUS
(1) We offer a Docker-based distribution that can be deployed on-premises or run in your private cloud. Or: (2) for local installations, there is also a Windows installer that runs PARAMUS on your local system. PARAMUS has a web/browser-based frontend.
Safety philosophy
Every user has their own PARAMUS. No shared environments – this is by design! The reason: PARAMUS is your personal assistant; it is really powerful and therefore needs access to your systems on your behalf. Shared environments may be safe these days … but still not enough for us to trust.
Both deployment options provide a consistent user experience across environments.
Upcoming
Paramus INFINITE ∞
Background / research as an independent actor
- Offline processing
- Submit / Receive thoughts (part of chats)
- On top of the PARAMUS Core Supervisor
- Per user
LOCAL LLM OR LICENCE FOR LLM NEEDED
This new technology builds on the CORE and has the ability to act in the background without supervision. It is intended for time-consuming, advanced reasoning. It implements asynchronous (parallel) agent access; a minimal sketch follows below.
This makes sense for long-running jobs such as computational chemistry calculations or route prediction, in combination with an advanced LLM like OpenAI o3 – and it is really expensive!
(First beta release in Q3/25)
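To illustrate the idea of asynchronous agent access, a minimal asyncio sketch with placeholder agent calls; the real background supervisor is not shown here.

```python
# Minimal sketch of asynchronous (parallel) agent access.
# The agent functions are placeholders; the real PARAMUS background supervisor differs.
import asyncio

async def run_agent(name: str, task: str) -> str:
    # Placeholder for a long-running agent call (e.g. a computational job
    # or a retrosynthesis route prediction).
    await asyncio.sleep(1)
    return f"{name}: finished '{task}'"

async def background_research(tasks: dict[str, str]) -> list[str]:
    # Fan out to all agents at once instead of waiting for each in turn.
    return await asyncio.gather(*(run_agent(a, t) for a, t in tasks.items()))

results = asyncio.run(background_research({
    "computational_chemistry": "theoretical IR spectrum",
    "synthesis": "retrosynthesis route",
}))
print(results)
```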