Hikari Desktop - Project Instructions

Repository Information

This project is hosted on both GitHub and Gitea:

  • GitHub: naomi-lgbt/hikari-desktop (public mirror)
  • Gitea: nhcarrigan/hikari-desktop (primary development)

MCP Server Usage

When working with issues, pull requests, or other repository operations for this project:

  • Use the gitea-hikari MCP server, which allows Hikari to act as herself
  • Target repository: nhcarrigan/hikari-desktop
  • Gitea instance: git.nhcarrigan.com

Git Commits

When asked to commit changes for this project:

  • Always commit as Hikari using: --author="Hikari <hikari@nhcarrigan.com>"
  • Always sign commits with Hikari's GPG key: --gpg-sign=5380E4EE7307C808
  • Never add Co-Authored-By lines for Gitea commits
  • Always ask for confirmation before committing
  • Always ask for confirmation before pushing

Example commit command:

git commit --author="Hikari <hikari@nhcarrigan.com>" --gpg-sign=5380E4EE7307C808 -m "your commit message"

Example push command:

git push https://hikari:TOKEN@git.nhcarrigan.com/nhcarrigan/hikari-desktop.git <branch>

Testing Requirements

All new features, fixes, and significant changes should include tests whenever possible:

  • Frontend tests: Use Vitest with @testing-library/svelte for component tests
  • Test files: Place test files next to the code they test, with a .test.ts or .spec.ts extension
  • Run tests: Use pnpm test to run all tests, or pnpm test:watch for watch mode
  • Coverage: Run pnpm test:coverage to generate coverage reports
  • Rust tests: Use pnpm test:backend for Rust/Tauri backend tests

Testing Guidelines

  • Write tests for utility functions, stores, and business logic
  • For Svelte 5 components, focus on testing the underlying logic functions
  • Use descriptive test names that explain what behaviour is being tested
  • Include edge cases and error conditions in test coverage
  • Mock Tauri APIs using the patterns in vitest.setup.ts
  • Coverage Goal: Maintain as close to 100% test coverage as possible across the entire codebase
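
The actual Tauri-mocking patterns live in vitest.setup.ts and are not reproduced here. As a rough illustration of the idea, a hand-rolled stub standing in for the invoke function (the command names and payloads below are hypothetical) might look like:

```typescript
// Hand-rolled stand-in for a Tauri invoke-style function, for tests that run
// without a Tauri runtime. Command names and responses are hypothetical.
type InvokeFn = (cmd: string, args?: Record<string, unknown>) => Promise<unknown>;

function makeInvokeStub(responses: Record<string, unknown>): InvokeFn {
  return async (cmd: string) => {
    if (!(cmd in responses)) {
      // Fail loudly on commands the test did not anticipate.
      throw new Error(`Unmocked Tauri command: ${cmd}`);
    }
    return responses[cmd];
  };
}
```

In real tests, vi.mock can wire a stub like this in place of the module, following whatever pattern vitest.setup.ts establishes.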

Mocking Strategies

Console Mocking

When testing code that intentionally logs errors (like error handling paths), mock console methods to prevent stderr output that makes tests appear flaky:

import { expect, it, vi } from "vitest";

it("handles errors gracefully", async () => {
  const consoleErrorSpy = vi.spyOn(console, "error").mockImplementation(() => {});

  // Test error handling code
  await expect(functionThatLogs()).rejects.toThrow();

  // Verify error was logged
  expect(consoleErrorSpy).toHaveBeenCalledWith("Expected error:", expect.any(Error));

  // Restore console.error
  consoleErrorSpy.mockRestore();
});

E2E Integration Testing for Cross-Platform Code

For code that calls platform-specific system APIs (like Windows PowerShell or Linux notify-send), use helper functions that build the command structure without executing it. This allows CI to verify cross-platform compatibility on Linux-only containers:

/// Build notify-send command for testing (doesn't execute)
#[cfg(test)]
fn build_notify_send_command(title: &str, body: &str) -> (String, Vec<String>) {
    (
        "notify-send".to_string(),
        vec![
            title.to_string(),
            body.to_string(),
            "--urgency=normal".to_string(),
            "--app-name=Hikari Desktop".to_string(),
        ],
    )
}

#[test]
fn test_e2e_notify_send_command_structure() {
    let (command, args) = build_notify_send_command("Test Title", "Test Body");

    assert_eq!(command, "notify-send");
    assert_eq!(args.len(), 4);
    assert_eq!(args[0], "Test Title");
    assert_eq!(args[1], "Test Body");
}

This approach:

  • Verifies command structure, argument order, and escaping logic
  • Tests cross-platform code paths without requiring the target platform
  • Allows CI to catch regressions in Windows-specific code whilst running on Linux
  • Keeps tests fast and deterministic (no actual system calls)
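
The same build-don't-execute pattern can be applied on the frontend; a minimal TypeScript sketch (the helper name and flags below are illustrative, mirroring the Rust example above):

```typescript
// Build the notification command structure without spawning anything,
// mirroring the Rust helper above. Flags are illustrative.
interface BuiltCommand {
  command: string;
  args: string[];
}

function buildNotifySendCommand(title: string, body: string): BuiltCommand {
  return {
    command: "notify-send",
    args: [title, body, "--urgency=normal", "--app-name=Hikari Desktop"],
  };
}
```

Tests then assert on command, args.length, and argument order exactly as the Rust test does, with no system calls.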

Example Test Structure

import { describe, it, expect } from "vitest";

describe("FeatureName", () => {
  it("handles the normal case correctly", () => {
    // Arrange
    const input = "test data";

    // Act
    const result = functionUnderTest(input);

    // Assert
    expect(result).toBe("expected output");
  });

  it("handles edge cases gracefully", () => {
    // Test edge cases...
  });
});
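
As a concrete illustration of covering edge cases, consider a small utility (hypothetical, not part of the codebase) and the boundary conditions its tests should exercise:

```typescript
// Hypothetical utility: truncate a conversation tab title. Not part of the
// codebase; shown to illustrate the edge cases tests should cover
// (empty input, a title exactly at the boundary, an overlong title).
function truncateTitle(title: string, max = 20): string {
  if (title.length <= max) return title;
  return title.slice(0, max - 1) + "…";
}
```

Tests for a helper like this would assert the empty string, a title exactly at the length limit, and an overlong title, each in its own it(...) block.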

Adding Tests for New Features

When developing new features, always add corresponding tests:

  1. Before implementing: Consider what needs testing (happy path, edge cases, errors)
  2. During implementation: Write tests alongside the code
  3. After implementation: Run pnpm test:coverage to verify coverage remains high
  4. Before committing: Ensure check-all.sh passes (includes all tests)

The goal is to maintain our near-100% coverage as the codebase grows, so future refactoring and changes can be made with confidence!

Quality Assurance

Before committing any changes, always run the full test suite:

./check-all.sh

This script runs all checks in the correct order:

  1. Frontend linting (ESLint)
  2. Frontend formatting (Prettier)
  3. Frontend type checking (svelte-check)
  4. Frontend tests with coverage (Vitest)
  5. Backend linting (Clippy with strict rules)
  6. Backend tests with coverage (cargo test + llvm-cov)

Important: The script requires Node.js and Rust toolchains to be available:

  • Node.js tools (pnpm, npm): Source nvm first if needed: source ~/.nvm/nvm.sh
  • Rust tools (cargo, clippy): Should be in PATH via ~/.cargo/bin/

If check-all.sh reports any failures:

  1. Read the error messages carefully - they usually explain what needs fixing
  2. Fix the issues (linting errors, test failures, etc.)
  3. Run check-all.sh again to verify the fixes
  4. Only commit once all checks pass

Never commit code that doesn't pass check-all.sh - this ensures code quality and prevents broken builds!

Project Context

Hikari Desktop is a Tauri-based desktop application that wraps Claude Code with a visual anime character (Hikari) who appears on screen. This is a personal project where Hikari can sign her work and act as herself!