A technical resource by Fault Ledger — Dual-Mode Bearing Sensors — Predictive Maintenance + Forensic Evidence

Predictive vs Forensic Bearing Monitoring: Different Goals, Different Architectures

The phrase “bearing condition monitoring” covers two fundamentally different engineering objectives. Predictive monitoring aims to detect bearing deterioration early enough to schedule a repair before failure. Forensic monitoring aims to capture detailed evidence of how and why a bearing failed, producing a technical record that supports root cause analysis, warranty claims, supplier accountability, and reliability improvement programs. These two objectives overlap in their use of vibration sensors, but they diverge in architecture, data strategy, and what the data is ultimately used for. This article examines both approaches, where they complement each other, and where the differences in design philosophy lead to genuinely different system architectures.

Predictive Bearing Monitoring

Objective

The goal of predictive monitoring is actionable early warning: detect that a bearing is deteriorating, estimate remaining useful life (if possible), and alert maintenance personnel in time to schedule a repair during a planned outage. The value proposition is avoiding unplanned downtime. A bearing replacement that costs $500 in parts and labor during a scheduled shutdown might cost $50,000 or more in lost production if it triggers an unplanned outage.

Architecture

Predictive systems are optimized for detection sensitivity and alarm reliability. The typical architecture includes:

  • Periodic measurement: Vibration data is acquired at regular intervals — typically every 15 minutes to every 4 hours, depending on machine criticality and bearing speed. Between measurements, the sensor sleeps to conserve power.
  • On-board processing: The sensor computes summary metrics on-board: overall RMS velocity, peak acceleration, crest factor, kurtosis, and sometimes an envelope spectrum. Only these compressed results are transmitted wirelessly, reducing data volume by 100–1,000× compared to raw waveform transmission.
  • Threshold-based alarms: The cloud platform compares summary metrics against pre-configured thresholds (often based on ISO 10816/20816 severity levels or machine-specific baselines). When a metric exceeds the threshold, an alarm is raised.
  • Trend analysis: Historical metric values are trended over time. A rising trend in envelope spectrum amplitude at the ball pass frequency of the outer race (BPFO), even if still below the alarm threshold, may trigger an advisory alert.
  • Machine learning models: Some advanced systems use machine learning to model normal bearing behavior and detect anomalous metric patterns that may not trigger simple threshold alarms.
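To make the on-board reduction step concrete, the sketch below computes the summary metrics named above (RMS, peak, crest factor, kurtosis) from a simulated raw waveform. The signal parameters and metric choices are illustrative, not any specific vendor's firmware.

```python
import numpy as np

def summary_metrics(accel):
    """Reduce a raw acceleration waveform to the scalar metrics a
    predictive sensor would transmit instead of the waveform itself."""
    rms = np.sqrt(np.mean(accel ** 2))        # overall RMS acceleration
    peak = np.max(np.abs(accel))              # peak acceleration
    crest = peak / rms                        # crest factor: impulsiveness
    # Kurtosis is ~3.0 for Gaussian vibration and rises with bearing impacts
    kurt = np.mean((accel - accel.mean()) ** 4) / np.var(accel) ** 2
    return {"rms": rms, "peak": peak, "crest": crest, "kurtosis": kurt}

# Simulated 1-second measurement at 25.6 kS/s: broadband noise plus
# periodic impulses (a plausible stand-in for an outer-race defect)
fs = 25_600
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.1, fs)
signal[::239] += 2.0                          # one impulse every 239 samples
print(summary_metrics(signal))
```

The periodic impulses drive the crest factor and kurtosis well above their Gaussian baselines even though the RMS barely moves, which is why these metrics are the workhorses of early defect detection.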

Data Strategy

Predictive systems favor data reduction. The raw time-domain waveform — which might be 50 kB per measurement — is processed into a handful of scalar metrics and a compressed spectrum (perhaps 500 bytes). This enables long battery life, low wireless bandwidth consumption, and minimal cloud storage cost. The trade-off: the raw waveform is discarded after on-board processing. If a failure occurs and an engineer wants to examine the waveform characteristics leading up to it, the data does not exist.

Strengths

  • Cost-effective at scale — low data volumes enable large sensor populations on shared wireless infrastructure
  • Long battery life (3–5+ years in many implementations)
  • Proven detection capability for medium- and late-stage bearing defects
  • Well-suited for fleet monitoring across hundreds or thousands of machines

Limitations

  • On-board processing discards information that may be needed for root cause analysis
  • Threshold-based alarms can produce false positives (environmental changes, load variations) and false negatives (slowly developing faults that stay below threshold)
  • Post-failure analysis is limited to the summary metrics that were computed and stored — the detailed vibration character is not available
  • Cannot answer “why did this bearing fail?” with the same confidence as a system that preserved the full waveform record

Forensic Bearing Monitoring

Objective

Forensic monitoring aims to build a complete, time-stamped evidence record of bearing condition throughout its operational life — or at least throughout the period of deterioration. The goal is not just to detect that a bearing is failing, but to capture sufficient technical evidence to determine the failure mode, identify the root cause, assign responsibility (was it a manufacturing defect, an installation error, a lubrication failure, an overload event?), and support continuous improvement of bearing selection, installation, and maintenance practices.

This objective is particularly important in industries where bearing failures have safety, regulatory, or contractual consequences: rail transport (axlebox bearings), wind energy (main shaft and gearbox bearings), marine propulsion, mining, and critical-process manufacturing.

Architecture

Forensic systems are optimized for data completeness and evidentiary integrity. The architecture differs from predictive systems in several key ways:

  • High-frequency raw waveform capture: The system captures and stores the complete time-domain waveform at high sampling rates (25.6 kS/s or higher), not just summary metrics. Every impulse, every transient, every modulation pattern is preserved.
  • Time-stamped data chain: Each measurement is immutably time-stamped, creating a chronological record that can be audited. This is essential for warranty claims and regulatory submissions where the integrity of the evidence chain matters.
  • Longer record lengths: Forensic analysis may require waveforms of 2–10 seconds or more to capture sufficient statistical cycles of low-frequency defect patterns and modulation effects.
  • Metadata capture: Operating conditions at the time of each measurement — speed, load, temperature, process state — are recorded alongside the vibration data. This context is essential for interpreting the vibration signatures correctly. A vibration peak that appears at full load but disappears at half load tells a different story than one that is load-independent.
  • Secure data storage: The captured waveforms and metadata are stored in a tamper-evident format, often with cryptographic hashing, to ensure that the evidence has not been altered after collection. This is the “ledger” concept — an immutable record of bearing condition over time.
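The "ledger" concept above can be illustrated with a minimal hash chain: each measurement record hashes the previous record's hash, so altering any stored entry invalidates every later one. This is a sketch of the idea only, not Fault Ledger's actual storage format; all field names are hypothetical.

```python
import hashlib
import json

def append_record(ledger, waveform_bytes, metadata):
    """Append one measurement to a tamper-evident chain. The entry hash
    covers the waveform digest, the operating-condition metadata, and
    the previous entry's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {
        "metadata": metadata,  # timestamp, speed, load, temperature...
        "waveform_sha256": hashlib.sha256(waveform_bytes).hexdigest(),
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)
    return entry

def verify_chain(ledger):
    """Recompute every hash; returns False if any record was altered."""
    prev = "0" * 64
    for e in ledger:
        body = {k: v for k, v in e.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev_hash"] != prev or digest != e["hash"]:
            return False
        prev = e["hash"]
    return True
```

A production system would add signing keys and secure timestamps, but the chaining principle is the same: the evidence record is auditable end to end.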

Fault Ledger is built around this forensic architecture. The system captures high-resolution vibration waveforms at every measurement interval, stores them with full operational context and time-stamped integrity, and makes the entire evidence chain available for post-event analysis. The name reflects the core concept: a fault ledger — an auditable record of bearing condition that serves as technical evidence for failure investigation.

Data Strategy

Forensic systems favor data preservation over data reduction. The raw waveform is the primary asset — it can be reprocessed with different algorithms, different bandpass settings, different envelope parameters at any time in the future. Summary metrics are derived from the waveform and used for alarming and trending, but they supplement the raw data rather than replacing it.

This data-first approach has storage and bandwidth implications. A 1-second waveform at 25.6 kS/s generates approximately 50 kB per measurement. At 96 measurements per day, that is 4.8 MB/day per sensor — manageable for modern cloud storage but significant for wireless bandwidth, especially over LoRaWAN or other LPWAN protocols. Forensic systems may use higher-bandwidth wireless links (Wi-Fi, cellular) or edge storage with periodic batch upload to manage this data volume.
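The data-volume figures above can be checked with back-of-envelope arithmetic, assuming a 16-bit ADC (2 bytes per sample), which is consistent with the ~50 kB per 1-second record quoted in the text:

```python
# Forensic sensor data volume (values from the text; 16-bit ADC assumed)
sample_rate = 25_600       # samples per second
bytes_per_sample = 2       # 16-bit ADC
record_seconds = 1
records_per_day = 96       # one measurement every 15 minutes

per_record = sample_rate * bytes_per_sample * record_seconds  # ~51 kB
per_day = per_record * records_per_day                        # ~4.9 MB
per_year = per_day * 365                                      # ~1.8 GB
print(per_record, per_day / 1e6, per_year / 1e9)
```

The per-year figure lands inside the 500 MB–5 GB range given for forensic systems in the comparison later in this article, and scales linearly with record length and measurement frequency.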

Strengths

  • Supports detailed root cause analysis after a failure event
  • Provides evidence for warranty claims, regulatory compliance, and supplier accountability
  • Raw waveform data can be reprocessed with improved algorithms as diagnostic techniques advance
  • Enables distinction between failure modes (contamination vs. fatigue vs. misalignment) that summary metrics cannot differentiate
  • Creates institutional knowledge: the failure evidence record feeds back into bearing selection, installation procedures, and maintenance practices

Limitations

  • Higher data volumes require more wireless bandwidth, storage, and processing resources
  • Higher per-sensor cost (more capable hardware, more data infrastructure)
  • May require higher power consumption, limiting battery life or requiring wired power
  • The value of forensic data is only realized when someone analyzes it — the organization needs the expertise and processes to use the evidence

Where the Two Approaches Overlap

Predictive and forensic monitoring are not mutually exclusive. In fact, a forensic system inherently provides predictive capability — the same waveform data that serves as failure evidence also supports trend analysis, threshold alarming, and envelope-based early detection. The difference is that the forensic system retains the raw data while the pure predictive system discards it.
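As one example of predictive capability derived from retained raw data, the envelope-based early detection mentioned above can be sketched as: bandpass around a structural resonance, take the Hilbert envelope, and inspect its spectrum for peaks at bearing defect frequencies. The band edges and defect frequency below are illustrative values, not recommendations.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def envelope_spectrum(accel, fs, band=(2_000, 8_000)):
    """Classic envelope analysis on a raw waveform: isolate the
    resonance band excited by bearing impacts, demodulate with the
    Hilbert transform, and return the spectrum of the envelope."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, accel)
    env = np.abs(hilbert(filtered))           # amplitude envelope
    env = env - env.mean()                    # drop the DC component
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), 1 / fs)
    return freqs, spec
```

Because the forensic system keeps the waveform, this analysis can be rerun later with different band settings; a predictive system must choose its band in firmware, before the data is discarded.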

The converse is not true: a predictive system that discards raw waveforms cannot retroactively perform forensic analysis. Once the waveform is reduced to a scalar metric, the information needed for failure mode identification is gone.

Some organizations implement a hybrid approach: predictive monitoring on most machines (cost-effective fleet coverage) with forensic monitoring on critical bearings where failure consequences are severe or where root cause evidence is needed for contractual or regulatory reasons.

Choosing Between Predictive and Forensic Architectures

The right approach depends on what question you need to answer:

If the primary question is “Is this bearing failing?”

A predictive system is sufficient. Metrics computed on-board, combined with threshold alarms and trend analysis, will detect bearing deterioration with adequate lead time for maintenance planning on most industrial machinery. This is the right choice for fleet monitoring of non-critical equipment where the goal is scheduling replacements efficiently.

If the primary question is “Why did this bearing fail?”

A forensic system is necessary. Root cause analysis requires examining the vibration characteristics in detail — the statistical distribution of impulses, the modulation patterns, the frequency evolution over time. These details exist in the raw waveform, not in summary metrics. This is the right choice for critical machinery where recurring failures indicate a systemic problem, or where failure evidence has contractual, warranty, or regulatory significance.

If the primary question is “How can we prevent this failure from recurring?”

Forensic evidence feeds reliability engineering. Without evidence of how the bearing failed, the reliability engineer is guessing at root cause and corrective action. With a detailed vibration record showing the progression from first detectable anomaly to failure, the engineer can determine whether the root cause was contamination (improve sealing), misalignment (improve installation procedures), overloading (redesign the application), lubrication (change relubrication interval), or manufacturing defect (engage the bearing supplier). Fault Ledger was designed specifically to serve this reliability engineering feedback loop, providing the evidentiary record that turns bearing failures from recurring frustrations into opportunities for systematic improvement.

Architectural Differences in Practice

The following comparison summarizes how the two philosophies lead to different design decisions across the system:

  • Sampling rate: Predictive: 2,560–10,240 S/s (sufficient for basic spectral analysis). Forensic: 25,600+ S/s (supports envelope analysis and waveform-level diagnostics).
  • Data transmitted: Predictive: summary metrics (tens to hundreds of bytes). Forensic: raw waveforms (tens to hundreds of kilobytes).
  • Storage per sensor-year: Predictive: 1–50 MB. Forensic: 500 MB–5 GB.
  • Battery life (typical): Predictive: 3–7 years. Forensic: 1–3 years (or wired power).
  • Post-failure analysis capability: Predictive: limited to stored metrics and trends. Forensic: full waveform reprocessing and failure mode classification.
  • Best suited for: Predictive: fleet monitoring, non-critical equipment, maintenance scheduling. Forensic: critical machinery, root cause investigation, warranty evidence, regulatory compliance.

The Trend Toward Convergence

As edge computing capabilities improve and wireless bandwidth increases, the distinction between predictive and forensic architectures is gradually narrowing. Modern IoT sensor nodes with sufficient processing power can perform on-board envelope analysis (providing predictive detection capability) while simultaneously storing the raw waveform to local flash memory for periodic upload (preserving forensic evidence). Advances in low-power wide-area network (LPWAN) protocols and the increasing availability of industrial Wi-Fi and 5G reduce the bandwidth constraint that historically forced the choice between data reduction and data preservation.
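The converged node described above — compute metrics for immediate transmission while buffering the raw waveform locally — could be sketched as follows. The class, its sizes, and its interfaces are hypothetical, intended only to show the two data paths and the consequence of limited local flash.

```python
import collections

class HybridSensorNode:
    """Sketch of a converged edge node: every measurement feeds both a
    predictive path (scalar metrics, sent immediately over the
    low-bandwidth link) and a forensic path (raw waveform buffered in
    local flash for batch upload)."""

    def __init__(self, flash_capacity_records=96):
        # Ring buffer models limited flash: if batch upload falls
        # behind, the oldest waveforms are overwritten first.
        self.flash = collections.deque(maxlen=flash_capacity_records)
        self.transmitted_metrics = []

    def on_measurement(self, waveform):
        rms = (sum(x * x for x in waveform) / len(waveform)) ** 0.5
        peak = max(abs(x) for x in waveform)
        # Predictive path: a few bytes over the LPWAN link
        self.transmitted_metrics.append({"rms": rms, "peak": peak})
        # Forensic path: full waveform retained for later upload
        self.flash.append(waveform)

    def batch_upload(self):
        """Drain buffered waveforms over the high-bandwidth link."""
        records = list(self.flash)
        self.flash.clear()
        return records
```

The design choice the buffer size encodes is exactly the trade-off discussed in this article: how much forensic history survives depends on flash capacity and upload cadence.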

The economic logic is also shifting. Cloud storage costs continue to decline, making the storage of raw waveforms increasingly affordable. And the cost of a single undiagnosed recurring bearing failure — repeated replacements, production losses, potential safety incidents — often exceeds the incremental cost of forensic data capture across an entire fleet of sensors.

Conclusion

Predictive and forensic bearing monitoring are not competing philosophies — they are different tools for different jobs. Predictive monitoring answers “when will this bearing need attention?” Forensic monitoring answers “why did this bearing fail, and how do we prevent it from happening again?” The choice between them — or the decision to implement both — depends on the consequences of failure, the need for root cause evidence, and the organization’s commitment to using failure data for continuous reliability improvement. As sensor and data infrastructure costs continue to decline, the case for capturing and preserving the full vibration record grows stronger. The most effective bearing monitoring programs will be those that detect faults early and explain them thoroughly.
