Quickstart
==========

Library usage (recommended)
---------------------------

The recommended way to use ProbeLLM is as a Python library:

.. code-block:: python

   from probellm import VulnerabilityPipelineAsync

   pipeline = VulnerabilityPipelineAsync(
       model_name="gpt-5.2",
       test_model="gpt-4o-mini",
       judge_model="gpt-5.2",
       max_depth=3,
       num_simulations=100,
       num_samples=5,
   )
   pipeline.add_datasets_batch(["mbpp", "mmlu"])
   pipeline.run()

CLI usage
---------

You can also use the CLI module wrappers:

.. code-block:: bash

   python -m probellm.search

Outputs
-------

Results are written under ``results/``:

.. code-block:: text

   results/run__sim_samples/
   ├── metadata.json
   ├── /
   │   ├── results_*.json
   │   └── checkpoints/
   └── enhanced_analysis/
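The layout above can be consumed programmatically. As a minimal sketch, assuming only the directory structure shown (the schema of the JSON files themselves is not documented here, so each file is loaded as-is), the per-behavior result files can be collected with the standard library:

.. code-block:: python

   import json
   from pathlib import Path


   def collect_results(results_root: str) -> list[dict]:
       """Gather every results_*.json under a run directory.

       Relies only on the tree shown above: result files are named
       results_*.json and may sit at any depth under the run root.
       """
       records = []
       for path in sorted(Path(results_root).rglob("results_*.json")):
           with open(path) as f:
               records.append(json.load(f))
       return records

A helper like this is hypothetical, not part of the ProbeLLM API; it is shown only to illustrate how the output tree can be traversed.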