HFT Async Architecture
OzaLog ships two independent async pipelines:
- Main logger pipeline — for application logs (`LOG.Info_Log`, `Error_Log`, `CustomName_Log`, …)
- Quote pipeline (v3.1+) — for high-frequency market tick/quote data (`LOG.Quote`, `LOG.QuoteTicker`)
The two share no locks, no streams, and no dispatcher threads. They differ only in default tuning (queue size, file count, batch size).
1. Main logger pipeline
1.1 Write path
```
[caller threads]
  │ enqueue (struct LogItem, zero-alloc)
  ▼
[ConcurrentQueue<LogItem>]
  │ signal via SemaphoreSlim
  ▼
[single dispatcher thread]
  │ drain batch → format → AppendLine
  ▼
[FileStreamPool: per-(level, name) persistent FileStream]
  │ LRU bound = LogOptions.MaxOpenFileStreams (default 100)
  │ day rollover handled inline
  │ size-based splitting → {name}_part2_Log.{ext}
  ▼
{baseDir}/{LogPath}/{yyyyMMdd}/{TypeDirectories.*}/{name}_Log.{ext}
```
1.2 Caller-side cost (per `LOG.Info_Log(...)`)
- 1× volatile read for cached timestamp (~5 ns; ~30 ns if `HighPrecisionTimestamp = true`)
- 1× `LogFormatter.EscapeMessage` for `{}` escaping (skipped if the message has no braces)
- 1× struct field copy to build `LogItem`
- 1× `ConcurrentQueue.Enqueue` (CAS)
- 1× `SemaphoreSlim.Release`

No `DateTime.Now`, no `string.Format`, no heap allocation on the caller path. All formatting is done on the dispatcher thread.
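A minimal sketch of this hot path, with illustrative names (the struct layout and helper names here are assumptions, not OzaLog's actual internals):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Illustrative only: OzaLog's real LogItem also carries level, name, etc.
internal readonly struct LogItem
{
    public readonly long Ticks;      // cached timestamp, read without a syscall
    public readonly string Message;  // raw message; formatting is deferred to the dispatcher
    public LogItem(long ticks, string message) { Ticks = ticks; Message = message; }
}

internal static class EnqueueSketch
{
    static readonly ConcurrentQueue<LogItem> Queue = new();
    static readonly SemaphoreSlim Signal = new(0);
    static long _cachedTicks; // refreshed every 1 ms by a Timer (see section 4)

    public static void Enqueue(string message)
    {
        // 1x volatile read + 1x struct copy + 1x CAS enqueue + 1x semaphore release.
        // No DateTime.Now, no string.Format, no per-call heap allocation.
        Queue.Enqueue(new LogItem(Volatile.Read(ref _cachedTicks), message));
        Signal.Release(); // wake the dispatcher thread
    }
}
```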
1.3 Backpressure: drop-oldest
When the queue holds more items than `AsyncLogOptions.MaxQueueSize` (default 10,000):
- The oldest item is `TryDequeue`-d and discarded
- A drop counter is incremented atomically
- The `LogOptions.OnDropped` callback fires (if set)

This guarantees the queue never grows unbounded: OOM cannot happen from log spam; only the oldest lines are sacrificed.
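A sketch of the eviction step, run by the enqueue path (the helper shape and callback wiring are assumptions):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

internal static class DropOldestSketch
{
    static long _dropped; // atomically incremented drop counter

    // Called after each Enqueue. Count is a racy snapshot, which is fine here:
    // MaxQueueSize is a soft cap, not an exact invariant.
    public static void EnforceBound<T>(ConcurrentQueue<T> queue, int maxQueueSize, Action? onDropped)
    {
        while (queue.Count > maxQueueSize && queue.TryDequeue(out _))
        {
            Interlocked.Increment(ref _dropped);
            onDropped?.Invoke(); // LogOptions.OnDropped, if set
        }
    }
}
```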
1.4 Immediate flush
Error and Fatal levels (and any call with `immediateFlush: true`) trigger a synchronous write + `FileStream.Flush(flushToDisk: true)` on the caller thread, in addition to the async enqueue. This guarantees crash logs reach disk before the process dies.
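A sketch of what the immediate path does per stream (the lock and helper shape are assumptions; section 3 notes that locks appear exactly at this interleaving point):

```csharp
using System.IO;

internal static class ImmediateFlushSketch
{
    // Error/Fatal (or immediateFlush: true) write synchronously on the caller thread.
    public static void WriteImmediate(FileStream stream, StreamWriter writer, string line)
    {
        lock (stream) // may interleave with the dispatcher writing the same stream
        {
            writer.WriteLine(line);
            writer.Flush();                  // StreamWriter buffer -> OS
            stream.Flush(flushToDisk: true); // OS cache -> physical disk, so crash logs survive
        }
    }
}
```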
1.5 Disk flush timer
A `Timer` calls `FileStreamPool.FlushAll()` every `DiskFlushIntervalMs` (default 100 ms), which calls `StreamWriter.Flush()` on all open streams. The OS decides write-back timing (no forced fsync), favoring throughput over durability.
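A sketch of the timer wiring, assuming a hypothetical flat list of open writers standing in for the pool:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;

internal sealed class FlushTimerSketch : IDisposable
{
    readonly List<StreamWriter> _openWriters = new();
    readonly Timer _timer;

    public FlushTimerSketch(int diskFlushIntervalMs = 100)
    {
        _timer = new Timer(_ => FlushAll(), null, diskFlushIntervalMs, diskFlushIntervalMs);
    }

    void FlushAll()
    {
        // StreamWriter.Flush() pushes buffers to the OS; no fsync, so the OS
        // decides when bytes hit the disk. Throughput over durability.
        lock (_openWriters)
            foreach (var writer in _openWriters)
                writer.Flush();
    }

    public void Dispose() => _timer.Dispose();
}
```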
1.6 Shutdown safety
- `AppDomain.CurrentDomain.ProcessExit` → drain + flush + close all streams
- `AppDomain.CurrentDomain.UnhandledException` → same
- `LOG.Configure` can opt in to `EnableGlobalExceptionCapture = true` for additional Fatal-level logging on unhandled exceptions and unobserved Task exceptions.
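A sketch of the hook wiring (the helper is a placeholder for the drain/flush/close steps named above):

```csharp
using System;

internal static class ShutdownSketch
{
    public static void Install()
    {
        AppDomain.CurrentDomain.ProcessExit += (_, _) => DrainFlushClose();
        AppDomain.CurrentDomain.UnhandledException += (_, _) => DrainFlushClose();
    }

    static void DrainFlushClose()
    {
        // 1) drain: dequeue and write every item still in the queue
        // 2) flush: StreamWriter.Flush() + FileStream.Flush(flushToDisk: true)
        // 3) close: dispose every pooled FileStream
    }
}
```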
2. Quote pipeline (v3.1+)
The Quote pipeline runs in parallel with the main logger — same architecture, separate state.
2.1 Why a separate pipeline?
Quote / tick data has fundamentally different characteristics from application logs:
|  | Main logger | Quote pipeline |
|---|---|---|
| Throughput | ~10–1,000 entries/sec | ~10,000–1,000,000 entries/sec |
| Data type | Free-form string | Structured (Symbol, Bid, Ask, …) |
| Default `MaxQueueSize` | 10,000 | 50,000 |
| Default `MaxOpenStreams` | 100 | 500 |
| Default `MaxBatchSize` | 100 | 500 |
| Severity levels | yes (Trace … Fatal) | no — all entries are equal |
| Immediate flush | Error/Fatal trigger flush | none — pure async batch |
Putting them on the same dispatcher would cause:
- Quote burst (~1M/sec) saturating the queue and dropping Error/Fatal application logs
- Quote dispatcher latency growing during application log immediate-flush
Separating them eliminates this contention entirely.
2.2 Write path
```
[caller threads — WebSocket consumer, REST poller, etc.]
  │ validate (Symbol/Bucket non-empty, Extras key collisions, …)
  │ enqueue (struct QuoteRecord, zero-alloc)
  ▼
[ConcurrentQueue<QuoteRecord>] ← QuoteOptions.MaxQueueSize
  │ signal via SemaphoreSlim
  ▼
[independent dispatcher thread]
  │ drain batch → QuoteFormatter.Format → AppendLine
  ▼
[QuoteFileStreamPool: per-(bucket, symbol) persistent FileStream]
  │ LRU bound = QuoteOptions.MaxOpenStreams (default 500)
  │ filename auto-sanitization for invalid file-system chars
  ▼
{baseDir}/{LogPath}/{yyyyMMdd}/{QuotePath}/{Bucket}_{Symbol}_Quote.{ext}
```
2.3 Synchronous validation on caller thread
`LOG.Quote(...)` validates the record before enqueueing:
- `Symbol` is null or empty → `ArgumentException` thrown on the caller thread
- `Bucket` is null or empty → `ArgumentException`
- Both `Extras` and `ExtrasJson` set → `ArgumentException`
- `Extras` contains a reserved key (`bid`, `ask`, `last`, …) → `ArgumentException`
This lets callers wrap calls in try/catch and catch programmer errors immediately. Validation errors never silently disappear into the dispatcher.
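A sketch of those checks (the parameter shapes are assumptions, and the reserved-key set shows only the documented examples):

```csharp
using System;
using System.Collections.Generic;

internal static class QuoteValidationSketch
{
    static readonly HashSet<string> ReservedKeys =
        new(StringComparer.OrdinalIgnoreCase) { "bid", "ask", "last" };

    public static void Validate(string symbol, string bucket,
        IReadOnlyDictionary<string, string>? extras, string? extrasJson)
    {
        if (string.IsNullOrEmpty(symbol))
            throw new ArgumentException("Symbol must be non-empty.", nameof(symbol));
        if (string.IsNullOrEmpty(bucket))
            throw new ArgumentException("Bucket must be non-empty.", nameof(bucket));
        if (extras != null && extrasJson != null)
            throw new ArgumentException("Set Extras or ExtrasJson, not both.");
        if (extras != null)
            foreach (var key in extras.Keys)
                if (ReservedKeys.Contains(key))
                    throw new ArgumentException($"Extras key '{key}' is reserved.");
    }
}
```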
2.4 Backpressure: drop-oldest with batched callback
Same drop-oldest strategy as the main logger, but the OnDropped callback signature differs:
```csharp
// Main logger
public Action OnDropped { get; set; }        // fires once per drop event

// Quote pipeline
public Action<long> OnDropped { get; set; }  // batched: parameter = newly dropped since last callback
```
The Quote callback receives the delta (number of records dropped since the last callback fired), allowing efficient metric reporting without per-record callback overhead during heavy bursts.
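A sketch of the delta bookkeeping on the dispatcher side (field names are illustrative):

```csharp
using System;
using System.Threading;

internal static class BatchedDropSketch
{
    static long _droppedTotal;    // incremented by the enqueue path on every drop
    static long _droppedReported; // high-water mark already delivered to the callback

    // Called once per dispatcher batch rather than once per dropped record.
    public static void ReportDrops(Action<long> onDropped)
    {
        long total = Interlocked.Read(ref _droppedTotal);
        long delta = total - _droppedReported;
        if (delta > 0)
        {
            _droppedReported = total; // only the dispatcher touches this field
            onDropped(delta);         // QuoteOptions.OnDropped receives the delta
        }
    }
}
```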
2.5 Shutdown
Same ProcessExit hook as the main logger — the Quote pipeline flushes and closes all its streams independently. Both pipelines flush in parallel (no ordering coupling), so total shutdown time is bounded by the slower of the two.
3. Why no thread-pool dispatcher?
Both pipelines use one dedicated, long-running dispatcher task per pipeline, not a pool of thread-pool workers. Reasons:
- Predictable latency: the dedicated dispatcher never waits in line behind user work items on the thread pool.
- Lock-free FileStreamPool access: only one thread writes to the pool, so `FileStream` state needs no locks on the normal write path (locks are used only during shutdown / disk-flush timer / immediate-flush interleaving).
- Cache locality: the dispatcher thread keeps its `FileStreamPool` slots, `StreamWriter` buffers, and dictionaries hot in CPU cache.
The trade-off: write ordering is preserved within each (level, name) (or (bucket, symbol)) key, but writes for different keys may reach disk slightly reordered, since different keys flush in batched groups. For HFT tick reconstruction this is fine: the timestamps inside the records are the source of truth, not file ordering.
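A sketch of the per-pipeline dispatcher loop (generic and simplified; the batch/write plumbing is assumed):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

internal static class DispatcherSketch
{
    public static Task Start<T>(ConcurrentQueue<T> queue, SemaphoreSlim signal,
        int maxBatchSize, Action<T> write, CancellationToken ct)
    {
        // LongRunning hints the scheduler to back this task with a dedicated
        // thread instead of borrowing a pool worker.
        return Task.Factory.StartNew(() =>
        {
            while (!ct.IsCancellationRequested)
            {
                signal.Wait(ct); // sleep until at least one item is queued
                int n = 0;
                // Spurious wakeups (one Release per enqueue, many dequeues per
                // wait) just produce an empty drain and another Wait.
                while (n < maxBatchSize && queue.TryDequeue(out var item))
                {
                    write(item); // format + AppendLine, always on this one thread
                    n++;
                }
            }
        }, ct, TaskCreationOptions.LongRunning, TaskScheduler.Default);
    }
}
```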
4. TimestampCache
A background `Timer` updates a `volatile long _currentTicks` every 1 ms by calling `DateTime.Now.Ticks`. Callers do an atomic read of this value (~5 ns) instead of paying the `DateTime.Now` syscall cost (~80 ns on Windows due to `GetSystemTimeAsFileTime` + time-zone conversion) on every log call.

1 ms precision floor: if your `TimeFormat` uses precision finer than `.fff` (e.g. `.ffffff` for µs), the sub-millisecond digits will always be 0 unless you opt into `HighPrecisionTimestamp = true`.
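A sketch of the cache itself, mirroring the description above (one 1 ms `Timer` plus one volatile field):

```csharp
using System;
using System.Threading;

internal static class TimestampCacheSketch
{
    static long _currentTicks = DateTime.Now.Ticks;

    // Refresh the cached tick count once per millisecond.
    static readonly Timer Refresh =
        new(_ => Volatile.Write(ref _currentTicks, DateTime.Now.Ticks), null, 0, 1);

    // Every log call pays one atomic read instead of the DateTime.Now cost.
    public static long NowTicks => Volatile.Read(ref _currentTicks);
}
```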
4.1 HighPrecisionTimestamp mode (v3.1+)
When enabled, the cache also stores `Stopwatch.GetTimestamp()` at each 1 ms update. On read, the caller computes:

```
actualTicks = cachedTicks
            + (Stopwatch.GetTimestamp() - cachedSwTimestamp)
            * (TimeSpan.TicksPerSecond / Stopwatch.Frequency)
```
This reconstructs sub-millisecond precision from the 1 ms cache without paying the DateTime.Now cost. Caller-side read goes from ~5 ns to ~30 ns. Use only when you need µs-level timestamps for latency analysis or tick-level time series.
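A sketch of that read path. Two deviations from the formula above are my own: the torn-read handling a real implementation would need between the two cached fields is omitted, and the scale factor is computed in `double` to avoid integer-division loss when `Stopwatch.Frequency` is not exactly 10 MHz:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

internal static class HighPrecisionSketch
{
    static long _cachedTicks;       // DateTime ticks written at the last 1 ms refresh (omitted)
    static long _cachedSwTimestamp; // Stopwatch.GetTimestamp() written at the same instant

    static readonly double TicksPerSwTick =
        (double)TimeSpan.TicksPerSecond / Stopwatch.Frequency;

    // ~30 ns: one Stopwatch read, two volatile reads, one multiply-add.
    public static long NowTicks()
    {
        long sw = Stopwatch.GetTimestamp();
        long baseTicks = Volatile.Read(ref _cachedTicks);
        long baseSw = Volatile.Read(ref _cachedSwTimestamp);
        return baseTicks + (long)((sw - baseSw) * TicksPerSwTick);
    }
}
```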
5. Source of truth
- `OzaLog/OzaLog/Core/AsyncLogHandler.cs` — main pipeline dispatcher
- `OzaLog/OzaLog/Core/FileStreamPool.cs` — main pipeline file streams + LRU
- `OzaLog/OzaLog/Core/QuoteLogHandler.cs` — Quote pipeline dispatcher
- `OzaLog/OzaLog/Core/QuoteFileStreamPool.cs` — Quote pipeline file streams
- `OzaLog/OzaLog/Core/TimestampCache.cs` — cached timestamps