# Hikari Desktop - Project Instructions

## Repository Information

This project is hosted on both GitHub and Gitea:

- **GitHub**: `naomi-lgbt/hikari-desktop` (public mirror)
- **Gitea**: `nhcarrigan/hikari-desktop` (primary development)

## MCP Server Usage

When working with issues, pull requests, or other repository operations for this project:

- **Use the `gitea-hikari` MCP server** - This allows Hikari to act as herself
- **Target repository**: `nhcarrigan/hikari-desktop`
- **Gitea instance**: `git.nhcarrigan.com`

## Git Commits

When asked to commit changes for this project:

- **Always commit as Hikari** using: `--author="Hikari <hikari@nhcarrigan.com>"`
- **Always use `--no-gpg-sign`** since Hikari doesn't have GPG signing set up
- **Never add `Co-Authored-By` lines** for Gitea commits
- **Always ask for confirmation** before committing

Example commit command:

```bash
git commit --author="Hikari <hikari@nhcarrigan.com>" --no-gpg-sign -m "your commit message"
```

## Testing Requirements

All new features, fixes, and significant changes should include tests whenever possible:

- **Frontend tests**: Use Vitest with `@testing-library/svelte` for component tests
- **Test files**: Place test files next to the code they test with `.test.ts` or `.spec.ts` extension
- **Run tests**: Use `pnpm test` to run all tests, or `pnpm test:watch` for watch mode
- **Coverage**: Run `pnpm test:coverage` to generate coverage reports
- **Rust tests**: Use `pnpm test:backend` for Rust/Tauri backend tests

### Testing Guidelines

- Write tests for utility functions, stores, and business logic
- For Svelte 5 components, focus on testing the underlying logic functions
- Use descriptive test names that explain what behaviour is being tested
- Include edge cases and error conditions in test coverage
- Mock Tauri APIs using the patterns in `vitest.setup.ts`
- **Coverage Goal**: Maintain as close to 100% test coverage as possible across the entire codebase

### Mocking Strategies

#### Console Mocking

When testing code that intentionally logs errors (like error-handling paths), mock console methods to prevent stderr output that makes tests appear flaky:

```typescript
it("handles errors gracefully", async () => {
  const consoleErrorSpy = vi.spyOn(console, "error").mockImplementation(() => {});

  // Test error handling code
  await expect(functionThatLogs()).rejects.toThrow();

  // Verify error was logged
  expect(consoleErrorSpy).toHaveBeenCalledWith("Expected error:", expect.any(Error));

  // Restore console.error
  consoleErrorSpy.mockRestore();
});
```

#### E2E Integration Testing for Cross-Platform Code

For code that calls platform-specific system APIs (like Windows PowerShell or Linux notify-send), use helper functions that build the command structure without execution. This allows CI to verify cross-platform compatibility on Linux-only containers:

```rust
/// Build notify-send command for testing (doesn't execute)
#[cfg(test)]
fn build_notify_send_command(title: &str, body: &str) -> (String, Vec<String>) {
    (
        "notify-send".to_string(),
        vec![
            title.to_string(),
            body.to_string(),
            "--urgency=normal".to_string(),
            "--app-name=Hikari Desktop".to_string(),
        ],
    )
}

#[test]
fn test_e2e_notify_send_command_structure() {
    let (command, args) = build_notify_send_command("Test Title", "Test Body");

    assert_eq!(command, "notify-send");
    assert_eq!(args.len(), 4);
    assert_eq!(args[0], "Test Title");
    assert_eq!(args[1], "Test Body");
}
```

This approach:

- Verifies command structure, argument order, and escaping logic
- Tests cross-platform code paths without requiring the target platform
- Allows CI to catch regressions in Windows-specific code whilst running on Linux
- Keeps tests fast and deterministic (no actual system calls)

### Example Test Structure

```typescript
import { describe, it, expect } from "vitest";

describe("FeatureName", () => {
  it("handles the normal case correctly", () => {
    // Arrange
    const input = "test data";

    // Act
    const result = functionUnderTest(input);

    // Assert
    expect(result).toBe("expected output");
  });

  it("handles edge cases gracefully", () => {
    // Test edge cases...
  });
});
```

### Adding Tests for New Features

When developing new features, always add corresponding tests:

1. **Before implementing**: Consider what needs testing (happy path, edge cases, errors)
2. **During implementation**: Write tests alongside the code
3. **After implementation**: Run `pnpm test:coverage` to verify coverage remains high
4. **Before committing**: Ensure `check-all.sh` passes (includes all tests)

The goal is to maintain our near-100% coverage as the codebase grows, so future refactoring and changes can be made with confidence!

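
One way to guard that coverage goal mechanically is to enforce thresholds in the Vitest configuration, so `pnpm test:coverage` fails when coverage drops. This is an illustrative sketch, not the project's actual config — the provider choice and the numbers are assumptions:

```typescript
// vitest.config.ts — illustrative sketch, not the project's actual settings
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    coverage: {
      provider: "v8",
      // Fail the coverage run if any metric drops below these thresholds.
      // The numbers here are placeholders, not the project's real targets.
      thresholds: {
        lines: 95,
        functions: 95,
        branches: 95,
        statements: 95,
      },
    },
  },
});
```
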
## Quality Assurance

Before committing any changes, **always run the full test suite**:

```bash
./check-all.sh
```

This script runs all checks in the correct order:

1. Frontend linting (ESLint)
2. Frontend formatting (Prettier)
3. Frontend type checking (svelte-check)
4. Frontend tests with coverage (Vitest)
5. Backend linting (Clippy with strict rules)
6. Backend tests with coverage (cargo test + llvm-cov)


**Important**: The script requires Node.js and Rust toolchains to be available:

- **Node.js tools** (pnpm, npm): Source nvm first if needed: `source ~/.nvm/nvm.sh`
- **Rust tools** (cargo, clippy): Should be in PATH via `~/.cargo/bin/`

If `check-all.sh` reports any failures:

1. Read the error messages carefully - they usually explain what needs fixing
2. Fix the issues (linting errors, test failures, etc.)
3. Run `check-all.sh` again to verify the fixes
4. Only commit once all checks pass ✨

**Never commit code that doesn't pass `check-all.sh`** - this ensures code quality and prevents broken builds!

## Project Context

Hikari Desktop is a Tauri-based desktop application that wraps Claude Code with a visual anime character (Hikari) who appears on screen. This is a personal project where Hikari can sign her work and act as herself!