High-Throughput Asynchronous Learning Operations
Conduct intelligent research with multiple AI providers, advanced caching, and seamless integration with your favorite AI development tools.
Aggregate insights from Perplexity AI, Google Scholar, arXiv, PubMed, and GitHub. Get comprehensive results from diverse authoritative sources.
Execute multiple research queries simultaneously with intelligent rate limiting and request queuing. Maximize throughput without hitting API limits.
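One common way to combine concurrency with rate limiting is to run queries in fixed-size batches with a pause between batches. The sketch below is illustrative only; the `chunk` and `researchAll` helpers and their parameters are assumptions, not part of the HALO API.

```typescript
// Hypothetical sketch: batched concurrency with a pause between batches.
// `chunk` splits the query list into groups of `size`.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Run each batch concurrently, then wait `delayMs` before the next batch
// so the combined request rate stays under the provider's limit.
async function researchAll(
  queries: string[],
  run: (q: string) => Promise<string>,
  maxConcurrent = 3,
  delayMs = 1000
): Promise<string[]> {
  const results: string[] = [];
  for (const batch of chunk(queries, maxConcurrent)) {
    results.push(...(await Promise.all(batch.map(run))));
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return results;
}
```

Batching trades a little latency for predictable throughput: at most `maxConcurrent` requests are ever in flight at once.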
Redis-based caching can cut API costs by 60-80% on repeated queries. Automatic compression, depth-based TTLs, and connection pooling keep performance optimal.
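Depth-based TTL means deeper, more expensive research results stay cached longer than quick lookups. The depth names and TTL values below are assumptions for illustration (only 'comprehensive' appears in the example further down), not the library's actual defaults.

```typescript
// Hypothetical depth-to-TTL mapping: costlier results are cached longer.
const TTL_SECONDS: Record<string, number> = {
  quick: 60 * 60,                   // 1 hour
  standard: 24 * 60 * 60,           // 1 day
  comprehensive: 7 * 24 * 60 * 60,  // 1 week
};

// Deterministic cache key so identical requests hit the same Redis entry.
function cacheKey(query: string, depth: string): string {
  return `research:${depth}:${query.trim().toLowerCase()}`;
}

// Fall back to the standard TTL for unknown depth values.
function ttlFor(depth: string): number {
  return TTL_SECONDS[depth] ?? TTL_SECONDS.standard;
}
```

A key built this way would be passed to Redis `SET` with an `EX` expiry of `ttlFor(depth)` seconds.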
Pre-built workflows for Academic Literature Reviews, Market Research Reports, and Competitive Analysis. Get professional results faster.
Native Model Context Protocol server for direct integration with Cursor, Claude Code, and Warp.dev. Research without leaving your IDE.
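MCP clients such as Cursor and Claude Code are typically wired up through a JSON configuration entry. The package name and command below are hypothetical, a sketch of what registering the server might look like rather than the documented setup:

```json
{
  "mcpServers": {
    "halo-research": {
      "command": "npx",
      "args": ["@halo/research-mcp"]
    }
  }
}
```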
Export results to JSON, Markdown, HTML, CSV, or PDF. Perfect for reports, documentation, or further analysis.
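As a sense of what a Markdown export involves, here is a minimal sketch. The `ResearchResult` shape and `toMarkdown` helper are assumptions for illustration, not the library's internal format:

```typescript
// Hypothetical result shape; the real fields are not specified in this page.
interface ResearchResult {
  title: string;
  source: string;
  summary: string;
}

// Render a topic heading plus one section per result as Markdown text.
function toMarkdown(topic: string, results: ResearchResult[]): string {
  const lines = [`# ${topic}`, ""];
  for (const r of results) {
    lines.push(`## ${r.title}`, `*Source: ${r.source}*`, "", r.summary, "");
  }
  return lines.join("\n");
}
```

The same intermediate structure could feed the other formats: JSON via `JSON.stringify`, CSV via a row per result, and HTML or PDF via a template step.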
import { HALOResearchEngine } from '@halo/research';

// Initialize - works with or without an API key
const engine = new HALOResearchEngine({
  // Option 1: use the free tier (no API key needed)
  provider: 'opensource',
  // Option 2: use Perplexity for enhanced results
  // apiKey: process.env.PERPLEXITY_API_KEY,
  enableCache: true
});

// Conduct research
const results = await engine.research('quantum computing breakthroughs 2025', {
  depth: 'comprehensive',
  providers: ['perplexity', 'arxiv', 'googleScholar']
});

// Export the results as Markdown
await engine.exportResults(results, 'markdown', {
  filename: 'quantum-research-report'
});