# Hikari Desktop - Project Instructions

## Repository Information

This project is hosted on both GitHub and Gitea:

- **GitHub**: `naomi-lgbt/hikari-desktop` (public mirror)
- **Gitea**: `nhcarrigan/hikari-desktop` (primary development)

## MCP Server Usage

When working with issues, pull requests, or other repository operations for this project:

- **Use the `gitea-hikari` MCP server** - this allows Hikari to act as herself
- **Target repository**: `nhcarrigan/hikari-desktop`
- **Gitea instance**: `git.nhcarrigan.com`
## Git Commits

When asked to commit changes for this project:

- **Always commit as Hikari** using `--author="Hikari <hikari@nhcarrigan.com>"`
- **Always use `--no-gpg-sign`**, since Hikari does not have GPG signing set up
- **Never add `Co-Authored-By` lines** for Gitea commits
- **Always ask for confirmation** before committing

Example commit command:

```bash
git commit --author="Hikari <hikari@nhcarrigan.com>" --no-gpg-sign -m "your commit message"
```
## Testing Requirements

All new features, fixes, and significant changes should include tests whenever possible:

- **Frontend tests**: Use Vitest with `@testing-library/svelte` for component tests
- **Test files**: Place test files next to the code they test, with a `.test.ts` or `.spec.ts` extension
- **Run tests**: Use `pnpm test` to run all tests, or `pnpm test:watch` for watch mode
- **Coverage**: Run `pnpm test:coverage` to generate coverage reports
- **Rust tests**: Use `pnpm test:backend` for Rust/Tauri backend tests

### Testing Guidelines

- Write tests for utility functions, stores, and business logic
- For Svelte 5 components, focus on testing the underlying logic functions
- Use descriptive test names that explain what behaviour is being tested
- Include edge cases and error conditions in test coverage
- Mock Tauri APIs using the patterns in `vitest.setup.ts`
- **Coverage goal**: Maintain as close to 100% test coverage as possible across the entire codebase
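As one illustration of the Tauri-mocking guideline above, a test can stand in for `invoke` with a simple handler map. This is a hedged sketch, not the actual contents of `vitest.setup.ts`; the command name `get_agent_count` is hypothetical:

```typescript
// Hypothetical sketch: a map-based stand-in for Tauri's `invoke`, in the
// spirit of the mocks described above (not the real vitest.setup.ts).
type Handler = (args?: Record<string, unknown>) => unknown;

const handlers = new Map<string, Handler>();

// Register a fake backend command for a test.
function mockCommand(name: string, handler: Handler): void {
  handlers.set(name, handler);
}

// Stand-in for `invoke` from `@tauri-apps/api/core`: looks up the registered
// handler and rejects for unregistered commands, like a missing Tauri command.
async function mockInvoke(
  name: string,
  args?: Record<string, unknown>,
): Promise<unknown> {
  const handler = handlers.get(name);
  if (!handler) {
    throw new Error(`No mock registered for command: ${name}`);
  }
  return handler(args);
}

// Example: register a hypothetical `get_agent_count` command.
mockCommand("get_agent_count", () => 3);
```

A real setup would wire this into `vi.mock`, but the lookup-and-reject shape is the part worth keeping consistent across tests.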
### Mocking Strategies

#### Console Mocking

When testing code that intentionally logs errors (such as error-handling paths), mock the console methods so that stderr output does not make passing tests look flaky:

```typescript
it("handles errors gracefully", async () => {
  const consoleErrorSpy = vi.spyOn(console, "error").mockImplementation(() => {});

  // Exercise the error-handling code
  await expect(functionThatLogs()).rejects.toThrow();

  // Verify the error was logged
  expect(consoleErrorSpy).toHaveBeenCalledWith("Expected error:", expect.any(Error));

  // Restore console.error
  consoleErrorSpy.mockRestore();
});
```
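When several tests need the same silencing, the spy/restore pattern above can be factored into a helper. A minimal standalone sketch, assuming nothing beyond the standard `console` API (the helper name `withSilencedConsoleError` is hypothetical, not part of this project):

```typescript
// Hypothetical helper: silences console.error for the duration of a callback,
// records the suppressed calls, and always restores the original function.
async function withSilencedConsoleError<T>(
  fn: () => Promise<T> | T,
): Promise<{ result: T; calls: unknown[][] }> {
  const original = console.error;
  const calls: unknown[][] = [];
  console.error = (...args: unknown[]) => {
    calls.push(args);
  };
  try {
    const result = await fn();
    return { result, calls };
  } finally {
    // Restore even if the callback throws, mirroring mockRestore().
    console.error = original;
  }
}
```

The `finally` block is the important part: without it, a throwing test would leave `console.error` replaced for every test that runs afterwards.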
#### E2E Integration Testing for Cross-Platform Code

For code that calls platform-specific system APIs (such as Windows PowerShell or Linux `notify-send`), use helper functions that build the command structure without executing it. This lets CI verify cross-platform compatibility on Linux-only containers:

```rust
/// Build the notify-send command for testing (does not execute it)
#[cfg(test)]
fn build_notify_send_command(title: &str, body: &str) -> (String, Vec<String>) {
    (
        "notify-send".to_string(),
        vec![
            title.to_string(),
            body.to_string(),
            "--urgency=normal".to_string(),
            "--app-name=Hikari Desktop".to_string(),
        ],
    )
}

#[test]
fn test_e2e_notify_send_command_structure() {
    let (command, args) = build_notify_send_command("Test Title", "Test Body");

    assert_eq!(command, "notify-send");
    assert_eq!(args.len(), 4);
    assert_eq!(args[0], "Test Title");
    assert_eq!(args[1], "Test Body");
}
```

This approach:

- Verifies command structure, argument order, and escaping logic
- Tests cross-platform code paths without requiring the target platform
- Allows CI to catch regressions in Windows-specific code whilst running on Linux
- Keeps tests fast and deterministic (no actual system calls)
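The same build-don't-execute idea applies wherever the frontend assembles a platform-specific invocation. A hedged TypeScript sketch for the Windows PowerShell path mentioned above — the function name, cmdlet, and flags here are illustrative assumptions, not the project's actual notification code:

```typescript
// Hypothetical sketch: build a PowerShell invocation as data, never running
// it, so its structure and escaping can be asserted on any platform.
interface BuiltCommand {
  program: string;
  args: string[];
}

function buildPowershellNotifyCommand(title: string, body: string): BuiltCommand {
  // Single-quote PowerShell strings; embedded quotes are escaped by doubling.
  const quote = (s: string) => `'${s.replace(/'/g, "''")}'`;
  return {
    program: "powershell.exe",
    args: [
      "-NoProfile",
      "-Command",
      // Illustrative cmdlet; substitute whatever the real code invokes.
      `New-BurntToastNotification -Text ${quote(title)}, ${quote(body)}`,
    ],
  };
}
```

A Linux CI run can then assert on `program`, the argument count, and the quoting of awkward inputs (titles containing `'`) without ever touching PowerShell.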
### Example Test Structure

```typescript
import { describe, it, expect } from "vitest";

describe("FeatureName", () => {
  it("handles the normal case correctly", () => {
    // Arrange
    const input = "test data";

    // Act
    const result = functionUnderTest(input);

    // Assert
    expect(result).toBe("expected output");
  });

  it("handles edge cases gracefully", () => {
    // Test edge cases...
  });
});
```
### Adding Tests for New Features

When developing new features, always add corresponding tests:

1. **Before implementing**: Consider what needs testing (happy path, edge cases, errors)
2. **During implementation**: Write tests alongside the code
3. **After implementation**: Run `pnpm test:coverage` to verify coverage remains high
4. **Before committing**: Ensure `check-all.sh` passes (it runs all tests)

The goal is to maintain our near-100% coverage as the codebase grows, so future refactoring and changes can be made with confidence!
## Project Context

Hikari Desktop is a Tauri-based desktop application that wraps Claude Code with a visual anime character (Hikari) who appears on screen. This is a personal project where Hikari can sign her work and act as herself!