# Hikari Desktop - Project Instructions
## Repository Information

This project is hosted on both GitHub and Gitea:

- GitHub: `naomi-lgbt/hikari-desktop` (public mirror)
- Gitea: `nhcarrigan/hikari-desktop` (primary development)
## MCP Server Usage

When working with issues, pull requests, or other repository operations for this project:

- Use the `gitea-hikari` MCP server - this allows Hikari to act as herself
- Target repository: `nhcarrigan/hikari-desktop`
- Gitea instance: `git.nhcarrigan.com`
## Git Commits

When asked to commit changes for this project:

- Always commit as Hikari using `--author="Hikari <hikari@nhcarrigan.com>"`
- Always use `--no-gpg-sign`, since Hikari doesn't have GPG signing set up
- Never add `Co-Authored-By` lines for Gitea commits
- Always ask for confirmation before committing

Example commit command:

```shell
git commit --author="Hikari <hikari@nhcarrigan.com>" --no-gpg-sign -m "your commit message"
```
## Testing Requirements

All new features, fixes, and significant changes should include tests whenever possible:

- Frontend tests: Use Vitest with `@testing-library/svelte` for component tests
- Test files: Place test files next to the code they test, with a `.test.ts` or `.spec.ts` extension
- Run tests: Use `pnpm test` to run all tests, or `pnpm test:watch` for watch mode
- Coverage: Run `pnpm test:coverage` to generate coverage reports
- Rust tests: Use `pnpm test:backend` for Rust/Tauri backend tests
## Testing Guidelines

- Write tests for utility functions, stores, and business logic
- For Svelte 5 components, focus on testing the underlying logic functions
- Use descriptive test names that explain what behaviour is being tested
- Include edge cases and error conditions in test coverage
- Mock Tauri APIs using the patterns in `vitest.setup.ts`
- Coverage goal: Maintain as close to 100% test coverage as possible across the entire codebase
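As a rough illustration of the kind of mock `vitest.setup.ts` can register (a sketch only - the actual setup file may differ, and the `greet` command below is hypothetical, not a real app command):

```typescript
// Sketch: a minimal hand-rolled stand-in for Tauri's invoke(), of the
// kind a setup file might wire in via vi.mock("@tauri-apps/api/core").
type InvokeHandler = (args: Record<string, unknown>) => unknown;

const handlers = new Map<string, InvokeHandler>();

/** Register a fake response for a Tauri command (test-only helper). */
function mockTauriCommand(command: string, handler: InvokeHandler): void {
  handlers.set(command, handler);
}

/** Drop-in replacement for invoke() while tests run. */
async function mockInvoke(
  command: string,
  args: Record<string, unknown> = {},
): Promise<unknown> {
  const handler = handlers.get(command);
  if (handler === undefined) {
    // Fail loudly so a test can't silently hit an unmocked command
    throw new Error(`No mock registered for Tauri command "${command}"`);
  }
  return handler(args);
}
```

In a test file this would be wired up with `vi.mock("@tauri-apps/api/core", () => ({ invoke: mockInvoke }))`, so component code calling `invoke(...)` hits the fake handler instead of the Rust backend. Tauri also ships `mockIPC` in `@tauri-apps/api/mocks`, which serves the same purpose.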
## Mocking Strategies

### Console Mocking

When testing code that intentionally logs errors (such as error-handling paths), mock the console methods to prevent stderr output that makes tests appear flaky:

```typescript
it("handles errors gracefully", async () => {
  const consoleErrorSpy = vi.spyOn(console, "error").mockImplementation(() => {});

  // Exercise the error-handling code
  await expect(functionThatLogs()).rejects.toThrow();

  // Verify the error was logged
  expect(consoleErrorSpy).toHaveBeenCalledWith("Expected error:", expect.any(Error));

  // Restore console.error
  consoleErrorSpy.mockRestore();
});
```
### E2E Integration Testing for Cross-Platform Code

For code that calls platform-specific system APIs (such as Windows PowerShell or Linux `notify-send`), use helper functions that build the command structure without executing it. This allows CI to verify cross-platform compatibility on Linux-only containers:

```rust
/// Build the notify-send command for testing (doesn't execute it).
#[cfg(test)]
fn build_notify_send_command(title: &str, body: &str) -> (String, Vec<String>) {
    (
        "notify-send".to_string(),
        vec![
            title.to_string(),
            body.to_string(),
            "--urgency=normal".to_string(),
            "--app-name=Hikari Desktop".to_string(),
        ],
    )
}

#[test]
fn test_e2e_notify_send_command_structure() {
    let (command, args) = build_notify_send_command("Test Title", "Test Body");
    assert_eq!(command, "notify-send");
    assert_eq!(args.len(), 4);
    assert_eq!(args[0], "Test Title");
    assert_eq!(args[1], "Test Body");
}
```
This approach:

- Verifies command structure, argument order, and escaping logic
- Tests cross-platform code paths without requiring the target platform
- Allows CI to catch regressions in Windows-specific code whilst running on Linux
- Keeps tests fast and deterministic (no actual system calls)
## Example Test Structure

```typescript
import { describe, it, expect } from "vitest";

describe("FeatureName", () => {
  it("handles the normal case correctly", () => {
    // Arrange
    const input = "test data";

    // Act
    const result = functionUnderTest(input);

    // Assert
    expect(result).toBe("expected output");
  });

  it("handles edge cases gracefully", () => {
    // Test edge cases...
  });
});
```
## Adding Tests for New Features

When developing new features, always add corresponding tests:

- Before implementing: Consider what needs testing (happy path, edge cases, errors)
- During implementation: Write tests alongside the code
- After implementation: Run `pnpm test:coverage` to verify coverage remains high
- Before committing: Ensure `check-all.sh` passes (it includes all tests)
The goal is to maintain our near-100% coverage as the codebase grows, so future refactoring and changes can be made with confidence!
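One way to hold that line mechanically is a coverage threshold that fails `pnpm test:coverage` when coverage dips. This is a sketch, not the project's actual config: it assumes a `vitest.config.ts` with the v8 provider, and the threshold numbers are illustrative.

```typescript
// vitest.config.ts (illustrative sketch; provider and numbers are assumptions)
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    coverage: {
      provider: "v8", // assumed provider; istanbul also works
      thresholds: {
        // Illustrative floors, not the project's real settings -
        // tighten toward 100 as the suite matures
        lines: 95,
        functions: 95,
        branches: 95,
        statements: 95,
      },
    },
  },
});
```

With thresholds in place, a coverage regression fails CI instead of silently eroding the goal.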
## Project Context

Hikari Desktop is a Tauri-based desktop application that wraps Claude Code with a visual anime character (Hikari) who appears on screen. This is a personal project where Hikari can sign her work and act as herself!