2 posts tagged with "testing"

How Property-Based Testing Caught Real Bugs in Monitor Logic

· 4 min read
Nick2bad4u
Project Maintainer

Property-based testing is not just a buzzword in this repo; it has caught real bugs in core monitor logic that traditional example-based tests missed.

This post walks through how we introduced fast-check, where we wired it in, and specific bug classes it exposed in Uptime Watcher.

Where property-based testing fits in the stack

Uptime Watcher uses fast-check across multiple layers:

  • Shared validation – to stress-test helpers like normalizeHistoryLimit and isValidIdentifier with randomized inputs.
  • Monitor operations – to generate randomized monitor definitions and ensure conversions between domain objects and database rows are round-trippable.
  • Stores and components – to drive Zustand stores and React components with arbitrary site names, URLs, and identifiers, verifying they stay in a valid state even under noisy input.

You can see this in commits like:

  • 0797d4d6f – introduces shared fast-check arbitraries and an assertProperty helper.
  • acb498188 – expands coverage for site-related components using arbitraries for names, URLs, and IDs.
  • c16d48e54 – tightens strict test directories and adds property-based tests to monitor operations and validation schemas.
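
The shared pieces themselves are small. As a rough sketch of the idea (the names and shapes below are illustrative, not the repo's exact code), a shared arbitrary plus the assertProperty helper can look like this:

import fc from "fast-check";

// Reusable arbitrary for site identifiers: non-empty, trimmed strings.
export const siteIdentifierArb = fc
    .string({ minLength: 1, maxLength: 64 })
    .filter((value) => value.trim().length > 0);

// Thin wrapper so individual tests only declare the property body.
export function assertProperty<T>(
    arbitrary: fc.Arbitrary<T>,
    predicate: (value: T) => boolean | void
): void {
    fc.assert(fc.property(arbitrary, predicate));
}

// Usage: assertProperty(siteIdentifierArb, (id) => id.trim().length > 0);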

Example: history normalization bugs

One of the early wins came from property-based tests around normalizeHistoryLimit, which is responsible for clamping and sanitizing history retention settings.

A simplified property looked like this (adapted from the real tests):

import fc from "fast-check";
import { expect } from "vitest";

import {
    DEFAULT_HISTORY_LIMIT_RULES,
    normalizeHistoryLimit,
} from "@shared/constants/history";

fc.assert(
    fc.property(fc.integer({ min: -10_000, max: 10_000 }), (limit) => {
        const normalized = normalizeHistoryLimit(limit, DEFAULT_HISTORY_LIMIT_RULES);

        // Invariants: the result always lands in [0, maxLimit]
        expect(normalized).toBeGreaterThanOrEqual(0);
        expect(normalized).toBeLessThanOrEqual(DEFAULT_HISTORY_LIMIT_RULES.maxLimit);
    })
);

That single property immediately exposed edge cases where:

  • Negative values were not clamped correctly.
  • Large values near the upper bound pushed us past intended limits.

Once the invariants were clear, the implementation was fixed to always return a value within the allowed range, and the property became a regression test that runs in every CI pass.
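
The fix itself boils down to clamping into a known range. A minimal sketch of that behavior, assuming a rules object with defaultLimit and maxLimit fields (which may not match the real shape of DEFAULT_HISTORY_LIMIT_RULES):

// Minimal clamping sketch; the real normalizeHistoryLimit handles more
// cases (unset values, defaulting, and so on) than shown here.
interface HistoryLimitRules {
    defaultLimit: number;
    maxLimit: number;
}

function clampHistoryLimit(limit: number, rules: HistoryLimitRules): number {
    if (!Number.isFinite(limit)) {
        return rules.defaultLimit; // reject NaN and ±Infinity outright
    }
    // Clamp into [0, maxLimit] so the property's invariants always hold.
    return Math.min(Math.max(Math.trunc(limit), 0), rules.maxLimit);
}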

Example: monitor → database mapping

The dynamic monitor schema in electron/services/database/utils/dynamicSchema.ts is powerful but easy to get wrong: it maps monitor objects to database rows and back using generated field definitions.

We use property-based tests to generate arbitrary monitor objects and exercise the mapping in both directions:

// monitorArbitrary and the mapping helpers come from the shared test
// arbitraries and the dynamic schema utilities described above.
fc.assert(
    fc.property(monitorArbitrary, (monitor) => {
        const row = mapMonitorToRow(monitor);
        const roundTripped = mapRowToMonitor(row);

        // We don't require deep equality because some fields are normalized,
        // but core identity must hold.
        expect(roundTripped.siteIdentifier).toBe(monitor.siteIdentifier);
        expect(roundTripped.type).toBe(monitor.type);
        expect(roundTripped.timeout).toBeGreaterThan(0);
    })
);

This kind of test surfaced issues where:

  • Certain dynamic fields were not included in the generated column list.
  • lastChecked could end up as NaN in the database when passed a malformed value.
  • Boolean flags like enabled / monitoring were not consistently mapped to integer 0/1 values.

The fixes in dynamicSchema.ts and the associated helpers are now guarded by these properties.
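
A simplified sketch of the kind of normalization those fixes introduced (the helper and field names here are illustrative, not the actual dynamicSchema.ts API):

// Coerce arbitrary truthy/falsy inputs to the 0/1 integers the database layer expects.
function toDbBoolean(value: unknown): 0 | 1 {
    return value === true || value === 1 ? 1 : 0;
}

// Guard against NaN leaking into the database for timestamp columns.
function toDbTimestamp(value: unknown): number | null {
    const timestamp = value instanceof Date ? value.getTime() : Number(value);
    return Number.isFinite(timestamp) ? timestamp : null;
}

// Example row fragment built from a monitor-like object:
const rowFragment = {
    enabled: toDbBoolean(true), // -> 1
    monitoring: toDbBoolean(undefined), // -> 0
    lastChecked: toDbTimestamp("not-a-date"), // -> null instead of NaN
};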

Example: store invariants under random actions

Zustand stores such as the site and monitor stores are also exercised with randomized actions.

A typical pattern:

fc.assert(
    fc.property(siteActionSequenceArb, (actions) => {
        const store = createSiteStoreTestHarness();

        for (const action of actions) {
            action.apply(store);

            // Invariants: no duplicate identifiers, selected site (if any) exists
            const state = store.getState();
            const identifiers = new Set(state.sites.map((s) => s.identifier));

            expect(identifiers.size).toBe(state.sites.length);
            if (state.selectedIdentifier) {
                expect(identifiers.has(state.selectedIdentifier)).toBe(true);
            }
        }
    })
);

This style of test flushed out subtle bugs where:

  • Removing a site did not always clear selectedIdentifier.
  • Certain actions could temporarily leave the store in a state where the selected site no longer existed.

Once fixed, the properties ensure those regressions never reappear.
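
As an illustration of the first bug class, here is a minimal Zustand store sketch in which removeSite also clears the selection; the real site store is considerably richer, so treat this as the shape of the fix rather than the actual code:

import { create } from "zustand";

// Hypothetical minimal store shape used only for illustration.
interface SiteStoreState {
    sites: Array<{ identifier: string }>;
    selectedIdentifier: string | undefined;
    removeSite: (identifier: string) => void;
}

export const useSiteStore = create<SiteStoreState>()((set) => ({
    sites: [],
    selectedIdentifier: undefined,
    removeSite: (identifier) =>
        set((state) => ({
            sites: state.sites.filter((site) => site.identifier !== identifier),
            // The fix: clear the selection whenever the selected site is removed.
            selectedIdentifier:
                state.selectedIdentifier === identifier
                    ? undefined
                    : state.selectedIdentifier,
        })),
}));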

How to run these tests locally

All the property-based tests live alongside the rest of the Vitest suite. Useful commands:

# Run all standard tests (frontend default)
npm test

# Focus on fuzzing/property-based tests across projects
npm run fuzz
npm run fuzz:electron
npm run fuzz:shared

# Run coverage across all test projects
npm run test:all:coverage

If you are adding new monitor types, complex validation rules, or store behavior, consider:

  1. Defining a fast-check arbitrary for your domain object.
  2. Writing at least one property that encodes an invariant you care about.
  3. Running the fuzzing commands above to see what breaks.
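
For example, a first property for a hypothetical new monitor type could look like this (the "dns" type and its fields are placeholders, not the actual Uptime Watcher monitor shape):

import fc from "fast-check";
import { expect } from "vitest";

// Placeholder arbitrary for a hypothetical "dns" monitor type.
const dnsMonitorArb = fc.record({
    type: fc.constant("dns"),
    host: fc.domain(),
    timeout: fc.integer({ min: 1, max: 60_000 }),
    retries: fc.integer({ min: 0, max: 10 }),
});

fc.assert(
    fc.property(dnsMonitorArb, (monitor) => {
        // Replace this body with the invariant you actually care about,
        // e.g. that your validation schema accepts every generated monitor.
        expect(monitor.timeout).toBeGreaterThan(0);
        expect(monitor.retries).toBeGreaterThanOrEqual(0);
    })
);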

Takeaways

  • Property-based testing caught bugs in history normalization, monitor mapping, and store invariants that would have required dozens of manual test cases to reproduce.
  • Once the infrastructure (arbitraries, helpers, harnesses) was in place, adding new properties became cheap and paid for itself quickly.
  • The tests now run on every CI pass, giving us confidence that changes to core monitor logic don't silently corrupt data or state.

For a deeper dive into the overall testing approach, see the Testing Architecture & Strategy page and ADR_010_TESTING_STRATEGY.md in the Architecture docs.

Uptime Watcher 19.0: From Local Script to Deeply Tested Desktop App

· 8 min read
Nick2bad4u
Project Maintainer

Uptime Watcher started as a pragmatic way to watch a handful of URLs from a local machine. By the time we reached v19.0.0, it had grown into a heavily tested, architecture-driven Electron app with a full documentation and tooling ecosystem around it.

This post walks through that journey using real commits from the git history, focusing on three themes:

  • How the architecture solidified around repositories, events, and IPC
  • How the testing strategy evolved into strict, property-based coverage
  • How documentation and tooling caught up with the rest of the stack

All examples and commit references are taken from the main branch as of 2025-11-25.

Early focus: correctness and documentation

One of the consistent patterns in this repo is that architecture and documentation are treated as first-class citizens.

A good example is the work captured in 4a0e1fdf1 (tagged as part of v19.0.0):

📝 [docs] Update TSDoc links for consistency

  • Correct links in TSDoc-Home.md to point to the appropriate files
  • Update TSDoc-Package-Tsdoc.md link to reflect the correct path
  • Modify TSDoc-Spec-Overview.md to ensure accurate package reference
  • Adjust comments in StatusSubscriptionIndicator.utils.ts for clarity
  • Refine useAddSiteForm.ts documentation by removing unnecessary link syntax
  • Enhance chartConfig.ts comments for better readability
  • Add Stylelint config schema reference in stylelint.config.mjs

This change is representative of the repo's style: fix docs and comments as soon as they become stale, keep configuration files typed and schema-backed, and treat tooling as part of the product.

Another example is cb0e9ed86:

📝 [docs] Update documentation frontmatter and summaries

  • Add frontmatter to multiple testing docs
  • Update summaries and metadata for clarity and consistency

That work laid the groundwork for the documentation and Docusaurus site that now power the public docs.

Hardening the tooling and CI pipeline

As the project grew, the CI and linting pipeline became a major focus. Changes like 32bba346a and 4c29fc698 show a pattern:

  • Every major tool (ESLint, Stylelint, Mega-Linter, Checkov, Grype, Secretlint, etc.) is wired with explicit schemas.

  • New configuration files get schema references immediately, so editors and CI can validate them.

  • Linting and scanning are integrated with npm scripts like:

    npm run lint:ci
    npm run lint:all:fix
    npm run docs:check
    npm run docs:validate-links

That level of rigor is not just aesthetic; it means mistakes introduced while refactoring eslint.config.mjs, stylelint.config.mjs, or the docs configs are caught early.
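
A schema reference in one of those config files is usually a single line. A representative sketch, not the repo's exact stylelint.config.mjs:

// stylelint.config.mjs – representative sketch only.
// The type reference lets editors and CI validate the config shape.
/** @type {import("stylelint").Config} */
export default {
    extends: ["stylelint-config-standard"],
    rules: {
        "color-hex-length": "short",
    },
};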

Architecting for a long-term Electron app

Uptime Watcher is not a toy app. It has a service container, a repository layer, a TypedEventBus, and a typed IPC bridge between main and renderer. These ideas are formalized in the Architecture docs and in ADRs like:

  • ADR_001_REPOSITORY_PATTERN.md
  • ADR_002_EVENT_DRIVEN_ARCHITECTURE.md
  • ADR_004_FRONTEND_STATE_MANAGEMENT.md
  • ADR_005_IPC_COMMUNICATION_PROTOCOL.md

These aren't just documents; they are enforced in code by scripts like scripts/architecture-static-guards.mjs, which is wired into npm run lint:architecture and the lint:ci pipeline.
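
As a rough illustration of why the typed event bus matters (this shows the general pattern only, not the repo's actual TypedEventBus API):

// Illustrative typed event-bus pattern; payload shapes are checked at
// compile time for both emitters and listeners.
type UptimeEvents = {
    "monitor:status-changed": { monitorId: string; status: "up" | "down" };
    "site:removed": { identifier: string };
};

class TypedBus<Events extends Record<string, unknown>> {
    private listeners: {
        [K in keyof Events]?: Array<(payload: Events[K]) => void>;
    } = {};

    on<K extends keyof Events>(event: K, listener: (payload: Events[K]) => void): void {
        const existing = this.listeners[event] ?? [];
        existing.push(listener);
        this.listeners[event] = existing;
    }

    emit<K extends keyof Events>(event: K, payload: Events[K]): void {
        for (const listener of this.listeners[event] ?? []) {
            listener(payload);
        }
    }
}

const bus = new TypedBus<UptimeEvents>();
bus.on("site:removed", ({ identifier }) => console.log("removed:", identifier));
bus.emit("site:removed", { identifier: "example-site" });
// bus.emit("site:removed", { monitorId: "x" }); // <- would not compile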

One commit that shows how seriously the documentation and workflow infrastructure is treated is 542eb08db:

✨ [feat] Implement Docusaurus documentation backup workflow

  • Add GitHub Actions workflow for building and backing up Docusaurus documentation
  • Create backup-docusaurus.yml to automate documentation deployment
  • Update package.json with commands for subtree backup and force push
  • Add documentation style guide for Docusaurus setup

...

On the surface this looks like just a docs workflow improvement, but it cements the idea that docs and architecture are part of the app, not an afterthought.

The testing story: from unit tests to strict, property-based coverage

The most striking arc in the git log is the evolution of the test strategy.

Step 1: Strict tests and shared arbitraries

Commit 0797d4d6f introduced strict test directories and shared fast-check arbitraries:

✨ [feat] Introduce property-based testing for various components and utilities

  • Add property-based tests for normalizeHistoryLimit
  • Implement property-based tests for isNonEmptyString and isValidIdentifier
  • Create property-based tests for useAlertStore
  • Add property-based tests for dataValidation
  • Add README for strict tests directory
  • Introduce shared fast-check arbitraries and an assertProperty helper

This commit is where property-based testing stopped being an experiment and became part of the standard toolbox.

Step 2: Scaling property-based coverage

The work continued in acb498188:

🧪 [test] Enhance comprehensive test coverage for site-related components

  • Use arbitrary site names, URLs, and identifiers in tests
  • Refactor multiple component tests to generate dynamic props
  • Improve branch coverage for modal and settings flows

Here, the pattern is clear:

  • First, establish shared arbitraries and helpers.
  • Then, systematically roll them out across components, stores, and utilities.

Step 3: Tightening test configuration

Most recently, c16d48e54 (paired with c45c0afef and ef6e0dc1a for the version bump) pushed testing further:

✨ [feat] Enhance testing configurations and add property-based tests

  • Update tsconfig to include strict test directories for better coverage
  • Introduce fast-check for property-based testing in monitor operations and validation schemas
  • Add comprehensive property tests for monitor identifiers and status validation
  • Improve test coverage for monitor operations with randomized input testing
  • Extend Vitest configuration to include strict test directories

Combined with the testing ADR (ADR_010_TESTING_STRATEGY.md), Uptime Watcher now has:

  • Dedicated Vitest configs for frontend, Electron, shared, and Storybook
  • Strict test projects for cross-cutting concerns
  • Property-based tests for core monitor logic, validation, and state stores

When you run:

npm run test:all:coverage

you are not just running unit tests; you are exercising a multi-project test matrix with coverage gates and mutation testing support via Stryker.
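
The real configuration is split across several files; a cut-down sketch of what a multi-project Vitest setup can look like (project names mirror the ones above, everything else is illustrative):

// vitest.workspace.ts – illustrative only; the repo's actual configs differ.
import { defineWorkspace } from "vitest/config";

export default defineWorkspace([
    {
        test: {
            name: "frontend",
            environment: "jsdom",
            include: ["src/**/*.test.{ts,tsx}"],
        },
    },
    {
        test: {
            name: "electron",
            environment: "node",
            include: ["electron/**/*.test.ts"],
        },
    },
    {
        test: {
            name: "shared",
            environment: "node",
            include: ["shared/**/*.test.ts"],
        },
    },
]);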

UX and UI polish in lockstep with tests

The git history also shows that UI/UX improvements are usually paired with better tests.

For example, d6311ce2c added a new icon set and refined layout/animation behavior:

✨ [feat] Add new icon assets and improve UI styling

  • Introduced new icon files for various sizes
  • Updated CSS for layout responsiveness
  • Improved scrollbar styles and card hover effects
  • Enhanced modal animations and utility helpers for tests

Later, f14823e1e introduced density controls for the site table view:

✨ [feat] Enhance Site List and Card Components

  • Add density options ("comfortable", "compact", "cozy")
  • Wire density into the UI store
  • Expand SiteList and SiteListLayoutSelector tests

The pattern is the same throughout:

  • Add a UX feature.
  • Wire it into the Zustand stores.
  • Extend tests (often with property-based generators) to guarantee the new surface behaves under variation.
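
As a hedged sketch of that last step, a property over the density options might look like this; the rowHeightFor helper is hypothetical and only stands in for whatever layout logic the density setting drives:

import fc from "fast-check";
import { expect } from "vitest";

type ListDensity = "comfortable" | "compact" | "cozy";

const densities: ListDensity[] = ["comfortable", "compact", "cozy"];
const densityArb = fc.constantFrom(...densities);

// Hypothetical layout helper standing in for the real density-driven styling.
function rowHeightFor(density: ListDensity): number {
    switch (density) {
        case "compact":
            return 32;
        case "cozy":
            return 40;
        default:
            return 48;
    }
}

fc.assert(
    fc.property(densityArb, (density) => {
        // Whatever density the user picks, rows keep a positive height.
        expect(rowHeightFor(density)).toBeGreaterThan(0);
    })
);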

Docusaurus and docs as a first-class product

The documentation site under docs/docusaurus/ is not an afterthought; it has its own build, lint, and backup story.

The commit 542eb08db introduced a Docusaurus backup workflow, new scripts, and a documentation style guide. Later commits like f6e2cb2a4 and c8930adb9 keep that documentation in sync with the main codebase and tooling.

Today, you can work entirely from the root via:

npm run docs:build
npm run docusaurus:start
npm run docusaurus:broken-links

and know that:

  • TypeDoc is up to date.
  • ESLint Inspector is regenerated.
  • Architecture guides and ADRs match the actual code.

Lessons learned

The following themes stand out from this journey:

  1. Docs and tests are not optional — a significant portion of the most important commits are pure documentation or testing work; they are treated as features, not chores.

  2. Property-based testing pays off quickly — once fast-check was adopted and shared arbitraries were in place, it became much easier to extend coverage without duplicating effort.

  3. Tooling can be a competitive advantage — the investment in linting, schemas, and CI scripts means refactors are safer, documentation stays synchronized, and contributors get rapid feedback from the tooling alone.

  4. Electron apps benefit from real architecture — the combination of a service container, repository pattern, TypedEventBus, and strict IPC boundaries makes Uptime Watcher feel more like a production backend than a desktop toy.

Where we go next

Looking ahead, there are clear directions for Uptime Watcher:

  • Expanding the plugin surface for custom monitor types and alert rules
  • Deepening analytics and historical reporting
  • Introducing new visualizations and dashboards into the Docusaurus site
  • Continuing to push on testing rigor, especially around edge-case networking behavior

If you want to dive deeper into how everything fits together, the natural starting points are the Architecture docs, the ADRs referenced above (ADR_010_TESTING_STRATEGY.md in particular), and the Testing Architecture & Strategy page.

Uptime Watcher has come a long way from a simple checker script. The git history tells the story: deliberate architecture, aggressive testing, and a commitment to documentation all the way down.