Research Standalone Tale • February 19, 2026 • AI, Finance, Sentiment Analysis, FinBERT, ONNX, LLM, Docker • 5 min read

Building Tierpulse: An Institutional-Grade Sentiment Engine

Exploring the architecture and implementation of Tierpulse, a high-performance financial sentiment analysis service built with FinBERT ONNX pipelines, real-time news integration, and LLM failover mechanisms.

Tierpulse Logo

I needed a cost-effective, scalable, resilient, and performant financial sentiment analysis service for ZephyrApex.

So… I built Tierpulse (https://www.tierpulse.com), an institutional-grade sentiment engine built for speed, utilising a three-tier intelligence failover with FinBERT ONNX pipelines, real-time news feeds, and LLM fallbacks.

What is Tierpulse?

Tierpulse is a specialised sentiment analysis platform designed for institutional financial applications, focusing on real-time processing of financial news and market data to extract sentiment signals. Unlike generic NLP models, Tierpulse is optimised for financial domain specificity, incorporating:

  • Domain-Adapted Models: FinBERT, a BERT model fine-tuned on financial corpora, providing superior accuracy for financial text classification.
  • Performance Optimisation: ONNX runtime deployment for low-latency inference, achieving sub-millisecond response times.
  • Scalability Architecture: Microservices design with horizontal scaling capabilities, supporting thousands of concurrent requests.
  • Resilience Mechanisms: Three-tier intelligence failover system ensuring continuous operation even during model failures or high-load scenarios.
  • Real-Time Integration: Direct feeds from major financial news sources with sub-second latency processing.

The system is containerised for easy deployment and includes comprehensive monitoring, logging, and API endpoints for seamless integration into trading platforms, risk management systems, and algorithmic strategies.

How Does It Work?

Tierpulse implements a sophisticated three-tier intelligence failover architecture:

Tier 1: Primary FinBERT ONNX Pipeline

The core processing engine uses FinBERT exported to ONNX and quantised for optimised inference:

// TierPulse Engine Orchestration Layer
pub struct AppState {
    engine: InferenceEngine, // Tier 1: Local ONNX
    limiter: RateLimiter, // governor protection
    cache: Cache<String, SentimentResult>,
}

impl AppState {
    pub async fn analyse(&self, req: AnalyseRequest) -> Result<SentimentResult, Error> {
        // 1. Enforce Traffic Control
        self.limiter.check().await?;

        // 2. Sequential Intelligence Failover
        if let Some(cached) = self.cache.get(&req.ticker).await {
            return Ok(cached);
        }

        // Escalation Tier 2 (Providers) -> Tier 3 (LLMs)
        self.fetch_with_failover(req).await
    }
}

This pipeline processes incoming news articles through:

  1. Text preprocessing with financial entity recognition
  2. Tokenisation using FinBERT’s vocabulary
  3. ONNX inference with GPU acceleration when available
  4. Confidence thresholding and output formatting
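The four steps above can be sketched end to end. This is a minimal, runnable illustration of the control flow only: the label set, threshold value, and `classify` helper are assumptions for the sketch, and the inference step is mocked with plain logits where the real engine would call FinBERT's tokeniser and an ONNX Runtime session.

```rust
// Sketch of the Tier-1 flow: tokenise -> infer -> softmax -> threshold.
// Inference is mocked so the control flow is runnable standalone; the real
// pipeline feeds FinBERT token IDs into an ONNX Runtime session instead.

const LABELS: [&str; 3] = ["bearish", "neutral", "bullish"];
const CONFIDENCE_THRESHOLD: f32 = 0.75; // below this, escalate to the next tier

fn softmax(logits: &[f32]) -> Vec<f32> {
    let max = logits.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = logits.iter().map(|l| (l - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

/// Returns (label, confidence), or None when confidence falls below the
/// threshold, signalling the orchestrator to fall through to Tier 2/3.
fn classify(logits: &[f32; 3]) -> Option<(&'static str, f32)> {
    let probs = softmax(logits);
    let (idx, &conf) = probs
        .iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .unwrap();
    (conf >= CONFIDENCE_THRESHOLD).then(|| (LABELS[idx], conf))
}

fn main() {
    // Mocked logits standing in for the ONNX session output.
    let confident = [0.1_f32, 0.2, 3.5]; // strong bullish signal
    let ambiguous = [1.0_f32, 1.1, 0.9]; // no clear signal
    println!("{:?}", classify(&confident)); // Some(("bullish", ...))
    println!("{:?}", classify(&ambiguous)); // None -> escalate
}
```

The important design point is that a low-confidence Tier-1 answer is treated the same as a failure: both produce `None` and hand control to the next tier.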

Tier 2: Real-Time News Aggregation

Integration with financial news APIs provides continuous data streams:

  • Feed Sources: Tiingo, MarketAux, Finnhub
  • Deduplication: Content-based hashing to prevent redundant processing
  • Filtering: Keyword-based and entity-based filtering for relevant financial instruments
  • Streaming Architecture: request batching at scale to minimise latency and round trips
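The deduplication step can be sketched as follows. This is an illustrative, standalone version, assuming a hypothetical `Deduplicator` type and an in-process `HashSet`; the production service would back this with a persistent or distributed store, but the fingerprinting mechanism is the same.

```rust
// Content-based deduplication sketch: fingerprint each article's normalised
// body and skip anything already seen, so re-syndicated copies of the same
// story are processed only once.
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

fn content_fingerprint(body: &str) -> u64 {
    // Normalise whitespace and case so trivially re-formatted copies collide.
    let normalised = body
        .split_whitespace()
        .collect::<Vec<_>>()
        .join(" ")
        .to_lowercase();
    let mut hasher = DefaultHasher::new();
    normalised.hash(&mut hasher);
    hasher.finish()
}

struct Deduplicator {
    seen: HashSet<u64>,
}

impl Deduplicator {
    fn new() -> Self {
        Self { seen: HashSet::new() }
    }

    /// Returns true if the article is new and should be processed.
    fn admit(&mut self, body: &str) -> bool {
        self.seen.insert(content_fingerprint(body))
    }
}

fn main() {
    let mut dedup = Deduplicator::new();
    println!("{}", dedup.admit("Apple beats Q3 estimates"));   // true: new
    println!("{}", dedup.admit("apple  beats Q3 estimates")); // false: duplicate
}
```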

Tier 3: LLM Fallback System

When primary models fail or confidence scores fall below thresholds, the system falls back to large language models:

// LLM analysis with Grok or DeepSeek
let prompt = format!(
    "Analyse the sentiment of the following financial news text.
    Classify as bullish, bearish, or neutral with confidence scores.

    Text: {}

    Response format: {{\"sentiment\": \"bullish|bearish|neutral\", \"confidence\": 0.0-1.0}}",
    text
);

let response = client
    .chat()
    .create(CreateChatCompletion::new("grok-4-1-fast-reasoning", vec![ChatCompletionMessage {
        role: Role::User,
        content: prompt,
    }]))
    .await?;

let result: serde_json::Value = serde_json::from_str(&response.choices[0].message.content)?;

This layered fallback supports the service's 99.999% uptime target by always providing an alternative analysis path.
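The escalation itself is simple: try each tier in order and return the first usable answer. The sketch below shows that sequencing with stubbed tier functions; the names (`tier1_local_onnx`, `analyse_with_failover`, the `Sentiment` struct) are illustrative stand-ins, not the service's actual API, and in production each stub would wrap the ONNX engine, the provider APIs, and the LLM clients respectively.

```rust
// Sequential intelligence failover sketch: each tier either returns a
// result or yields to the next one. Tier 1 and 2 are stubbed as failing
// so the flow reaches the Tier-3 LLM fallback.

#[derive(Debug, PartialEq)]
struct Sentiment {
    label: &'static str,
    confidence: f32,
    source_tier: &'static str,
}

type Tier = fn(&str) -> Option<Sentiment>;

fn tier1_local_onnx(_text: &str) -> Option<Sentiment> {
    None // simulate a low-confidence local result: escalate
}

fn tier2_providers(_text: &str) -> Option<Sentiment> {
    None // simulate a provider outage: escalate
}

fn tier3_llm(_text: &str) -> Option<Sentiment> {
    Some(Sentiment { label: "bullish", confidence: 0.9, source_tier: "tier_3_llm" })
}

fn analyse_with_failover(text: &str) -> Result<Sentiment, &'static str> {
    let tiers: [Tier; 3] = [tier1_local_onnx, tier2_providers, tier3_llm];
    for tier in tiers {
        if let Some(result) = tier(text) {
            return Ok(result);
        }
    }
    Err("all tiers exhausted")
}

fn main() {
    let result = analyse_with_failover("AAPL beats earnings expectations").unwrap();
    println!("{} via {}", result.label, result.source_tier); // bullish via tier_3_llm
}
```

Tagging each result with its `source_tier` (as the API response later in this post does) lets downstream consumers weight signals by how they were produced.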

System Architecture

The complete system can run on Kubernetes or as a standalone Docker service with:

  • API Gateway: Rate limiting and authentication
  • Worker Nodes: Auto-scaling based on queue depth
  • Monitoring: Prometheus metrics and Grafana dashboards
  • Storage: Moka for in-memory caching, Redis for distributed coordination
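To illustrate the caching layer in the component list above, here is a minimal cache-aside sketch with a TTL, standing in for Moka. The `TtlCache` type is a hypothetical std-only stand-in for the sketch; Moka provides the same behaviour (expiry, eviction) concurrently and without this bookkeeping.

```rust
// Cache-aside sketch with a TTL: keys are tickers, values expire after
// `ttl` so stale sentiment is never served. In production this role is
// played by Moka (in-process) and Redis (distributed coordination).
use std::collections::HashMap;
use std::time::{Duration, Instant};

struct TtlCache<V> {
    entries: HashMap<String, (Instant, V)>,
    ttl: Duration,
}

impl<V: Clone> TtlCache<V> {
    fn new(ttl: Duration) -> Self {
        Self { entries: HashMap::new(), ttl }
    }

    /// Returns the value only while it is still fresh.
    fn get(&self, key: &str) -> Option<V> {
        self.entries
            .get(key)
            .filter(|(inserted, _)| inserted.elapsed() < self.ttl)
            .map(|(_, v)| v.clone())
    }

    fn insert(&mut self, key: &str, value: V) {
        self.entries.insert(key.to_string(), (Instant::now(), value));
    }
}

fn main() {
    let mut cache: TtlCache<f32> = TtlCache::new(Duration::from_secs(60));
    cache.insert("AAPL", 0.82);
    // A fresh hit short-circuits the failover tiers entirely.
    println!("{:?}", cache.get("AAPL")); // Some(0.82)
    println!("{:?}", cache.get("TSLA")); // None -> full analysis
}
```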

How to Deploy and Use It

Tierpulse is available as a Docker container for easy deployment:

docker pull boxedcode/tierpulse:latest
docker run -p 8080:8080 --env-file .env boxedcode/tierpulse:latest

Environment configuration (.env):

TP_TIINGO_KEY=your_tiingo_api_key
TP_MARKETAUX_KEY=your_marketaux_key
TP_FINNHUB_KEY=your_finnhub_key
TP_GROK_KEY=your_grok_api_key
TP_DEEPSEEK_KEY=your_deepseek_key
TP_PRIMARY_LLM=grok
TP_AUTH_MODE=api_key
TP_AUTH_API_KEYS=tenantA:keyA,tenantB:keyB
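The `TP_AUTH_API_KEYS` value above is a comma-separated list of `tenant:key` pairs. A sketch of how such a value might be parsed into a lookup table (the format is taken from the example `.env`; the parsing code itself is illustrative, not Tierpulse's actual implementation):

```rust
// Parse "tenantA:keyA,tenantB:keyB" into a tenant -> API key map.
// Malformed pairs (missing the ':' separator) are silently skipped here;
// a real service would likely reject them at startup instead.
use std::collections::HashMap;

fn parse_api_keys(raw: &str) -> HashMap<String, String> {
    raw.split(',')
        .filter_map(|pair| {
            let (tenant, key) = pair.trim().split_once(':')?;
            Some((tenant.to_string(), key.to_string()))
        })
        .collect()
}

fn main() {
    let keys = parse_api_keys("tenantA:keyA,tenantB:keyB");
    println!("loaded {} tenant keys", keys.len()); // loaded 2 tenant keys
    println!("{:?}", keys.get("tenantA"));         // Some("keyA")
}
```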

For production deployment with Kubernetes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tierpulse
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tierpulse
  template:
    metadata:
      labels:
        app: tierpulse
    spec:
      containers:
        - name: tierpulse
          image: boxedcode/tierpulse:latest
          ports:
            - containerPort: 8080
          envFrom:
            - secretRef:
                name: tierpulse-secrets

API Usage

import requests

response = requests.post(
    'http://localhost:8080/api/v1/analyse',
    json={
        'symbols': [
            {'ticker': 'AAPL', 'name': 'Apple Inc.'},
            {'ticker': 'TSLA', 'name': 'Tesla, Inc.'}
        ],
        'lookback_hours': 24,
        'max_articles_per_symbol': 5
    }
)
sentiment = response.json()
print(sentiment)
# {
#   "request_id": "tp_550e8400-e29b-41d4-a716-446655440000",
#   "results": [
#     {
#       "symbol": "AAPL",
#       "sentiment_score": 0.82,
#       "label": "bullish",
#       "confidence": 0.94,
#       "source_tier": "tier_1_local_onnx"
#     }
#   ],
#   "execution_time_ms": 450
# }

For more code examples and advanced configurations, visit the repository: https://github.com/kabudu/tierpulse

The project is open source and contributions are welcome. If you find the service useful, please give the GitHub repository a star!

The Docker image is available at: https://hub.docker.com/r/boxedcode/tierpulse