Lisa Loops

Lisa Loops is an automated bug-finding tool for your Lovable app. You give it your app, and it tests every page, form, and feature — clicking buttons, filling out forms, and checking that everything works. No manual testing required.

It works in three steps: first it prepares your app for testing, then it creates a test plan, then it runs every test automatically and gives you a report of what passed, what's broken, and what it fixed along the way.

Like her namesake, Lisa is rigorous, honest, and won't fake a pass to make you feel better.

How It Works

Lisa runs in three stages: Stage 0 prepares your app for testing, Stage 1 builds the test plan, and Stage 2 runs every test and compiles the report. The files below track each stage's output.
What You Get Back

Lisa creates and updates these files in your project as she works. The most important one is test-summary.md — that's your final report.

| File | What It Is | Created | Updated |
| --- | --- | --- | --- |
| testability-report.md | Summary of what was changed to make your app testable | Stage 0 | |
| test-accounts.json | Login credentials for test accounts (auto-created) | Stage 0 | |
| test-plan.json | Every test that will be run, with pass/fail tracking | Stage 1 | Stage 2 (every test) |
| lessons-learned.md | What Lisa learned about testing your specific app | Stage 1 | Stage 2 (every test) |
| test-summary.md | Final report: pass rate, bugs fixed, what needs attention | Stage 1 | Stage 2 (at the end) |
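
As an illustration, a single entry in the tests array of test-plan.json might look like the following sketch. The status, max_fix_attempts, fix_description, and blocked_reason fields are the ones referenced elsewhere in this document; the id and name fields are hypothetical:

```json
{
  "id": "forms-003",
  "name": "Contact form rejects empty email",
  "status": "pending",
  "max_fix_attempts": 3,
  "fix_description": null,
  "blocked_reason": null
}
```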

What Gets Tested

Lisa checks your app across 10 categories to make sure nothing is missed. She must have at least one test in each applicable area before the test plan is finalized.

1. Page Rendering & Navigation
2. Responsive / Viewport
3. Forms & Input
4. Empty & Loading States
5. Authentication & Authorization
6. CRUD Operations
7. Error Handling & Network Resilience
8. State Persistence
9. Interactive UI Elements
10. Accessibility Basics

How Lisa Stays Honest

The prompts include safeguards to prevent the AI from faking results or taking shortcuts.

1. Test rules are locked. Lisa is forbidden from changing what a test checks or what counts as a pass. If the app doesn't meet the spec, the app is wrong, not the test.
2. Every fix is logged. When Lisa fixes a bug, she has to describe exactly what she changed and count it. No silent modifications.
3. Fresh start after every fix. After fixing a bug, the test reruns from scratch in a new browser, not from cached state.
4. Blocked beats fake pass. If Lisa can't make a test pass after 3 attempts, it's marked as blocked with a detailed explanation rather than being quietly passed.
5. Honesty is baked in. The prompts explicitly tell Lisa her job is finding truth, not manufacturing passing results.
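
The fix-and-retry behavior described above can be sketched as a simple loop. This is an illustrative model, not Lisa's actual implementation; the function names and result shape are assumptions:

```python
def run_with_fix_budget(run_test, apply_fix, max_fix_attempts=3):
    """Illustrative sketch of the fix-retry loop: rerun from scratch
    after each fix, and mark the test blocked (never quietly passed)
    once the fix budget is exhausted."""
    fixes = []  # every fix is described and counted, no silent changes
    for attempt in range(max_fix_attempts + 1):
        if run_test():  # each call models a fresh browser session
            return {"status": "passed", "fixes": fixes}
        if attempt == max_fix_attempts:
            break  # budget exhausted: blocked beats fake pass
        fixes.append(apply_fix())
    return {
        "status": "blocked",
        "blocked_reason": f"still failing after {max_fix_attempts} fix attempts",
        "fixes": fixes,
    }
```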

Customization

Changing max fix attempts: Edit the max_fix_attempts field on individual tests in test-plan.json, or change the default in the Stage 1 prompt.

Adding tests later: Add new entries to the tests array in test-plan.json with "status": "pending". They'll be picked up on the next loop iteration.

Skipping tests: Set a test's status to "blocked" with a blocked_reason of "Manually skipped" to have the loop ignore it.
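
For example, manually skipping a test would look like this in its test-plan.json entry (other fields omitted, and the id is hypothetical):

```json
{
  "id": "auth-002",
  "status": "blocked",
  "blocked_reason": "Manually skipped"
}
```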

Troubleshooting

Loop seems stuck on one test: Check whether the test has hit max_fix_attempts. If the AI is still attempting fixes, it may be within its budget. If it's truly stuck, manually set the test to blocked.

Pass rate seems suspiciously high: Spot-check a few passed tests by looking at their fix_description. If tests are passing without any description and you know those features have issues, the pass criteria may be too loose; tighten them in Stage 1.

App server crashes mid-run: The Stage 2 prompt handles this; it should mark all remaining tests as blocked with the server error as the reason and proceed to finalize.
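
For the spot-check above, a small script can list passed tests that record no fix. This assumes the test-plan.json shape sketched earlier in this document (a top-level tests array with status and fix_description fields; the id field is illustrative):

```python
import json

def suspicious_passes(plan_path="test-plan.json"):
    """Return ids of tests marked passed that have no fix_description.
    Assumes an illustrative test-plan.json shape; field names may differ."""
    with open(plan_path) as f:
        plan = json.load(f)
    return [
        t.get("id")
        for t in plan["tests"]
        if t.get("status") == "passed" and not t.get("fix_description")
    ]
```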